Figures 1–13: (figure images and captions were not recovered from the extracted text)
Performance metrics of the binary and multi-class classification
| DL classifier algorithm | Class/dataset | Maximum training accuracy (%) | Maximum validation accuracy (%) |
|---|---|---|---|
| Modified VGG-16 classifier | Binary (ALL-IDB) | 98.50 | 68.33 |
| Modified InceptionNet classifier | Binary (ALL-IDB) | 98.50 | 78.33 |
| Ensemble classifier | Binary (ALL-IDB) | 94.50 | 83.33 |
| Modified VGG-16 classifier | Multi-class (real images) | 98.56 | 93.20 |
| Modified InceptionNet classifier | Multi-class (real images) | 99.76 | 97.87 |
| Ensemble classifier | Multi-class (real images) | 100 | 100 |
Comparison of the proposed approach with popular state-of-the-art (SOTA) methods
| Advantage criteria | Ensemble (VGG-16 + Inception) | Pre-trained VGG-16 | Pre-trained Inception | Random forest | SVM | ResNet50 (deep learning) | EfficientNet (deep learning) |
|---|---|---|---|---|---|---|---|
| Diversity in features | Yes | No | Yes | Yes | No | Yes | Yes |
| Generalization performance | Good | Good | Good | Good | Good | Excellent | Excellent |
| Robustness to overfitting | High | High | High | High | Moderate | High | High |
| Ensemble averaging benefit | Yes | No | No | No | No | No | No |
| Feature learning capabilities | Rich | Deep hierarchical | Diverse | Moderate | Linear | Deep hierarchical | Diverse |
| State-of-the-art performance | Yes | No (dated architecture) | Yes (at the time) | No | No | Yes | Yes (as of the time of training) |
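The "ensemble averaging benefit" credited only to the combined model above can be illustrated with a minimal soft-voting sketch: per-class probabilities from the two base classifiers are averaged, and the class with the highest mean probability is taken as the final prediction. The probability values below are hypothetical placeholders, not outputs of the actual trained VGG-16 or InceptionNet models.

```python
import numpy as np

def ensemble_average(prob_a: np.ndarray, prob_b: np.ndarray) -> np.ndarray:
    """Soft voting: average the per-class probabilities of two models."""
    return (prob_a + prob_b) / 2.0

# Hypothetical softmax outputs for 3 samples over 4 classes.
vgg_probs = np.array([[0.7, 0.1, 0.1, 0.1],
                      [0.2, 0.5, 0.2, 0.1],
                      [0.1, 0.1, 0.2, 0.6]])
inception_probs = np.array([[0.6, 0.2, 0.1, 0.1],
                            [0.1, 0.3, 0.5, 0.1],
                            [0.2, 0.1, 0.1, 0.6]])

avg = ensemble_average(vgg_probs, inception_probs)
pred = avg.argmax(axis=1)  # final ensemble class per sample
```

Because the averaged distribution smooths out disagreements between the two base models, a confidently wrong prediction from one model can be outvoted by the other, which is the robustness-to-overfitting advantage the table attributes to the ensemble.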