
Lightweight inception-UNet with attention mechanisms for semantic segmentation

Open Access | Apr 2026

Figures & Tables

Figure 1:

Framework of the proposed LWA-MoDUNet.

Figure 2:

Architecture of the encoder block of the proposed LWA-MoDUNet.

Figure 3:

Architecture of the inception block of the proposed LWA-MoDUNet [54].
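The block in Figure 3 follows the standard inception design cited in [54]: parallel convolution branches of different receptive fields, concatenated along the channel axis. A minimal PyTorch sketch of such a block is shown below; the branch widths and kernel sizes are illustrative assumptions, not the exact configuration of LWA-MoDUNet.

```python
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    """Inception-style block: parallel 1x1, 3x3, 5x5, and pooling branches,
    concatenated along the channel axis. Branch widths are illustrative."""
    def __init__(self, in_ch: int, branch_ch: int = 16):
        super().__init__()
        self.b1 = nn.Sequential(
            nn.Conv2d(in_ch, branch_ch, kernel_size=1), nn.ReLU(inplace=True))
        self.b3 = nn.Sequential(
            nn.Conv2d(in_ch, branch_ch, kernel_size=1),
            nn.Conv2d(branch_ch, branch_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True))
        self.b5 = nn.Sequential(
            nn.Conv2d(in_ch, branch_ch, kernel_size=1),
            nn.Conv2d(branch_ch, branch_ch, kernel_size=5, padding=2),
            nn.ReLU(inplace=True))
        self.bp = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, branch_ch, kernel_size=1), nn.ReLU(inplace=True))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # All branches preserve H x W, so the outputs concatenate cleanly.
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

x = torch.randn(1, 3, 128, 128)
print(InceptionBlock(3)(x).shape)   # torch.Size([1, 64, 128, 128])
```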

Figure 4:

Decoder architecture of the proposed LWA-MoDUNet.

Figure 5:

Diagram of additive attention gate [55].
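The additive attention gate of [55] computes a spatial attention map from a decoder gating signal g and an encoder skip connection x, then reweights x before it enters the decoder. A minimal PyTorch sketch of that standard formulation, under the assumption that g has already been upsampled to x's spatial size:

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate in the style of Attention U-Net [55]:
    alpha = sigmoid(psi(relu(W_g g + W_x x))), output = alpha * x,
    where g is the decoder gating signal and x the encoder skip features."""
    def __init__(self, g_ch: int, x_ch: int, inter_ch: int):
        super().__init__()
        self.w_g = nn.Conv2d(g_ch, inter_ch, kernel_size=1)   # W_g
        self.w_x = nn.Conv2d(x_ch, inter_ch, kernel_size=1)   # W_x
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)      # psi
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, g: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        alpha = self.sigmoid(self.psi(self.relu(self.w_g(g) + self.w_x(x))))
        return x * alpha   # broadcast the (N, 1, H, W) map over x's channels

g = torch.randn(1, 64, 32, 32)   # decoder features, already upsampled
x = torch.randn(1, 32, 32, 32)   # encoder skip features
print(AttentionGate(64, 32, 16)(g, x).shape)   # torch.Size([1, 32, 32, 32])
```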

Figure 6:

Flowchart of the proposed LWA-MoDUNet depicting the encoding and decoding phases with inception and attention modules.

Figure 7:

Sample images from the considered datasets (Row 1) with their corresponding GT (Row 2). GT, ground truth; IDD-Lite, Indian driving dataset-lite; MADS, martial arts, dancing and sports.

Figure 8:

Output masks generated by the UNet, E-Net, SegNet, UNet with ResNet-18 encoder, UNet with ResNet-34 encoder, and LWA-MoDUNet models on sample images from the autorickshaw dataset. GT, ground truth.

Figure 9:

Output masks generated by the UNet, E-Net, SegNet, UNet with ResNet-18 encoder, UNet with ResNet-34 encoder, and LWA-MoDUNet models on sample images from the IDD-Lite dataset. GT, ground truth; IDD-Lite, Indian driving dataset-lite.

Figure 10:

Output masks generated by the UNet, E-Net, SegNet, UNet with ResNet-18 encoder, UNet with ResNet-34 encoder, and LWA-MoDUNet models on sample images from the PETS dataset. GT, ground truth.

Figure 11:

Output masks generated by the UNet, E-Net, SegNet, UNet with ResNet-18 encoder, UNet with ResNet-34 encoder, and LWA-MoDUNet models on sample images from the MADS dataset. GT, ground truth; MADS, martial arts, dancing and sports.

Figure 12:

Output masks generated by the UNet, E-Net, SegNet, UNet with ResNet-18 encoder, UNet with ResNet-34 encoder, and LWA-MoDUNet models on sample images from the CT-Liver dataset. GT, ground truth.

Figure 13:

Comparative analysis of IoU scores on training and validation images from the autorickshaw dataset. IoU, intersection over union.

Figure 14:

Comparative analysis of IoU scores on training and validation images from the IDD-Lite dataset. IDD-Lite, Indian driving dataset-lite; IoU, intersection over union.

Figure 15:

Comparative analysis of IoU scores on training and validation images from the PETS dataset. IoU, intersection over union.

Figure 16:

Comparative analysis of IoU scores on training and validation images from the MADS dataset. IoU, intersection over union; MADS, martial arts, dancing and sports.

Figure 17:

Comparative analysis of IoU scores on training and validation images from the CT-Liver dataset. IoU, intersection over union.
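All five comparisons above use the IoU (Jaccard) score. For reference, a minimal NumPy sketch of the per-image computation on binary masks; the epsilon guard against empty unions is an implementation choice, not taken from the paper:

```python
import numpy as np

def iou_score(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over union for binary masks of 0s and 1s."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float((intersection + eps) / (union + eps))

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
print(f"IoU = {iou_score(pred, gt):.4f}")   # intersection 2 / union 4 = 0.5000
```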

Figure 18:

Comparison of model type 1 and type 2 errors across datasets. IDD-Lite, Indian driving dataset-lite; MADS, martial arts, dancing and sports.

Comparison of key performance indicators of LWA-MoDUNet and SOTA models on the autorickshaw dataset

Measures    UNet      UNet-ResNet34  UNet-ResNet18  E-Net     SegNet    LWA-MoDUNet
IoU score   0.7665    0.8459         0.8356         0.8480    0.7592    0.8768
Err         0.1326    0.0849         0.0912         0.0857    0.1403    0.0663
Acc (%)     86.7412   91.5061        90.8837        91.4349   85.9677   93.3691
Sp          0.8697    0.9303         0.9241         0.9514    0.8789    0.9429
Ss          0.8651    0.9009         0.8946         0.8829    0.8423    0.9248
F-score     0.8678    0.9165         0.9104         0.9177    0.8631    0.9344
Cc          0.7348    0.8306         0.8182         0.8315    0.7203    0.8676
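The KPI tables report IoU, error rate (Err), accuracy (Acc, in %), specificity (Sp), sensitivity (Ss), F-score, and a correlation coefficient (Cc). A sketch of how such measures can be computed from pixel-level confusion counts; taking Err as 1 − Acc and reading Cc as the Matthews correlation coefficient are common definitions assumed here, not confirmed by the paper:

```python
import numpy as np

def segmentation_kpis(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Pixel-level KPIs from binary masks. Err is taken as 1 - Acc and Cc
    as the Matthews correlation coefficient; both are assumptions."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)       # true positives
    tn = np.sum(~pred & ~gt)     # true negatives
    fp = np.sum(pred & ~gt)      # false positives (type 1 errors)
    fn = np.sum(~pred & gt)      # false negatives (type 2 errors)
    acc = (tp + tn) / (tp + tn + fp + fn)
    ss = tp / (tp + fn)                          # sensitivity (recall)
    sp = tn / (tn + fp)                          # specificity
    precision = tp / (tp + fp)
    f_score = 2 * precision * ss / (precision + ss)
    denom = np.sqrt(float(tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    cc = (tp * tn - fp * fn) / denom if denom else 0.0
    return {"IoU": tp / (tp + fp + fn), "Err": 1 - acc, "Acc (%)": 100 * acc,
            "Sp": sp, "Ss": ss, "F-score": f_score, "Cc": cc}

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
print(segmentation_kpis(pred, gt))
```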

Comparison of key performance indicators of LWA-MoDUNet and compared models on the IDD-Lite dataset

Measures    UNet      UNet-ResNet34  UNet-ResNet18  E-Net     SegNet    LWA-MoDUNet
IoU score   0.6031    0.6174         0.5981         0.566     0.3076    0.9283
Acc         0.9203    0.9398         0.9356         0.9321    0.8971    0.9616
Err         0.0716    0.0601         0.0643         0.0679    0.1028    0.00435
Sp          0.9500    0.9484         0.9469         0.9395    0.8975    0.9328
Ss          0.8534    0.8617         0.8472         0.8669    0.8896    0.9436
F-score     0.7056    0.7635         0.7485         0.7229    0.4705    0.8909
Cc          0.6794    0.7371         0.7186         0.6979    0.4965    0.8153

Comparative analysis of computational complexity

Model            Parameters (M)  Inference time (ms/image)
FCN-8s [28]      134.27          86.1
SegNet [29]      29.46           60.7
DeconvNet [31]   251.84          214.3
U-Net [30]       31.03           47.2
ResUNet [51]     8.10            55.8
LWA-MoDUNet      5.17            36.4
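The middle column appears to be the parameter count in millions; the values match the widely reported sizes of FCN-8s (~134M), SegNet (~29M), DeconvNet (~252M), and U-Net (~31M). A sketch of measuring both quantities in PyTorch, with a hypothetical stand-in model, input shape, and run count (the paper's timings were of course obtained on its own hardware and implementation):

```python
import time
import torch
import torch.nn as nn

def profile(model: nn.Module, input_shape=(1, 3, 256, 256), runs: int = 50):
    """Return (parameter count in millions, mean inference time in ms/image)
    for single-image forward passes on CPU. The input shape and run count
    are illustrative, not the paper's benchmarking protocol."""
    params_m = sum(p.numel() for p in model.parameters()) / 1e6
    model.eval()
    x = torch.randn(*input_shape)
    with torch.no_grad():
        for _ in range(5):                      # warm-up passes
            model(x)
        t0 = time.perf_counter()
        for _ in range(runs):
            model(x)
        ms_per_image = (time.perf_counter() - t0) / runs * 1e3
    return params_m, ms_per_image

# Hypothetical stand-in network, only to make the sketch runnable.
net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, kernel_size=1))
p, t = profile(net)
print(f"params: {p:.2f} M, inference: {t:.1f} ms/image")
```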

Ablation study of LWA-MoDUNet on the autorickshaw dataset

Measure       LWA-MoDUNet  W/o inception module  W/o attention
IoU score     0.8768       0.6729                0.8173
Error         0.0663       0.1452                0.0927
Accuracy (%)  93.3691      78.7966               86.4217
F-score       0.9344       0.7913                0.8786

Statistical performance analysis of LWA-MoDUNet against compared SOTA models

Models                     p-value (sample size = 30)  p-value (sample size = 50)
UNet and LWA-MoDUNet       0.0029 × 10⁻⁵               3.54 × 10⁻⁹
ResNet34 and LWA-MoDUNet   8.49 × 10⁻⁴                 7.28 × 10⁻⁶
ENet and LWA-MoDUNet       3.04 × 10⁻²                 9.51 × 10⁻³
SegNet and LWA-MoDUNet     4.85 × 10⁻²                 6.39 × 10⁻⁴
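The table does not state which paired test produced these p-values. The sketch below shows one common choice, a paired t-test over per-image scores via scipy.stats.ttest_rel, run on synthetic IoU values; scipy.stats.wilcoxon would be the usual non-parametric alternative:

```python
import numpy as np
from scipy import stats

# Synthetic per-image IoU scores for a baseline and the proposed model
# over the same 30 images; purely illustrative, not the paper's data.
rng = np.random.default_rng(0)
iou_baseline = rng.uniform(0.70, 0.85, size=30)
iou_proposed = iou_baseline + rng.uniform(0.02, 0.10, size=30)

t_stat, p_value = stats.ttest_rel(iou_proposed, iou_baseline)
print(f"paired t-test: t = {t_stat:.3f}, p = {p_value:.2e}")
```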

Comparison of key performance indicators of LWA-MoDUNet and compared models on the PETS dataset

Measures    UNet      UNet-ResNet34  UNet-ResNet18  E-Net     SegNet    LWA-MoDUNet
IoU score   0.7665    0.8459         0.8356         0.8480    0.7592    0.8768
Err         0.1326    0.0849         0.0912         0.0857    0.1403    0.0663
Acc (%)     86.7412   91.5061        90.8837        91.4349   85.9677   93.3691
Sp          0.8697    0.9303         0.9241         0.9514    0.8789    0.9429
Ss          0.8651    0.9009         0.8946         0.8829    0.8423    0.9248
F-score     0.8678    0.9165         0.9104         0.9177    0.8631    0.9344
Cc          0.7348    0.8306         0.8182         0.8315    0.7203    0.8676

Comparison of key performance indicators of LWA-MoDUNet and compared models on the CT-Liver dataset

Measures    UNet      UNet-ResNet34  UNet-ResNet18  E-Net     SegNet    LWA-MoDUNet
IoU score   0.8839    0.9445         0.9565         0.9349    0.9495    0.9650
Err         0.0616    0.0287         0.0223         0.0338    0.0262    0.0178
Acc (%)     93.8400   97.1308        97.7686        96.6167   97.3808   98.2162
Sp          0.9384    0.9758         0.9811         0.9719    0.9845    0.9855
Ss          0.9384    0.9669         0.9743         0.9605    0.9635    0.9788
F-score     0.9384    0.9714         0.9778         0.9664    0.9741    0.9822
Cc          0.8768    0.9427         0.9554         0.9324    0.9478    0.9643

Comparison of key performance indicators of LWA-MoDUNet and compared models on the MADS dataset

Measures    UNet      UNet-ResNet34  UNet-ResNet18  E-Net     SegNet    LWA-MoDUNet
IoU score   0.8968    0.9448         0.9084         0.9609    0.9721    0.9768
Err         0.0545    0.0290         0.0496         0.0201    0.0142    0.0118
Acc (%)     94.5520   97.1042        95.0428        97.9919   98.5781   98.8225
Sp          0.9464    0.9905         0.9818         0.9872    0.9916    0.9934
Ss          0.9447    0.9531         0.9229         0.9728    0.9801    0.9832
F-score     0.9456    0.9716         0.9520         0.9801    0.9859    0.9883
Cc          0.8910    0.9428         0.9028         0.9599    0.9716    0.9765
Language: English
Submitted on: Jul 11, 2025
Published on: Apr 10, 2026
In partnership with: Paradigm Publishing Services
Publication frequency: 1 issue per year

© 2026 Twinkle Tiwari, Mukesh Saraswat, published by Macquarie University, Australia
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.