Figure 3: Example of a general biometric system [8]
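Figure 3 shows the stages of a general biometric system: a sensor captures a sample, features are extracted, and a matcher compares them against a stored template to reach a decision. The Python sketch below illustrates that enrollment/verification flow in a generic way; the flatten-and-normalize feature extractor, the cosine-similarity matcher, and the 0.8 threshold are placeholders chosen for illustration, not components of any system surveyed here.

```python
import numpy as np

def extract_features(sample: np.ndarray) -> np.ndarray:
    """Placeholder feature extractor (in practice, e.g., a trained CNN embedding)."""
    vec = sample.astype(np.float32).ravel()
    return vec / (np.linalg.norm(vec) + 1e-12)  # L2-normalize

def match_score(template: np.ndarray, query: np.ndarray) -> float:
    """Cosine similarity between a stored template and a query feature vector."""
    return float(np.dot(template, query))

database = {}  # template storage: user id -> enrolled feature vector

def enroll(user_id: str, sample: np.ndarray) -> None:
    """Enrollment: capture a sample, extract features, store the template."""
    database[user_id] = extract_features(sample)

def verify(user_id: str, sample: np.ndarray, threshold: float = 0.8) -> bool:
    """Verification: compare a new capture against the claimed identity's template."""
    return match_score(database[user_id], extract_features(sample)) >= threshold
```

In a deployed system the placeholder extractor would be replaced by one of the deep models compared in the tables below, and the decision threshold would be tuned on genuine/impostor score distributions.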
IRIS-BASED DEEP LEARNING MODELS USING THE CASIA-IRIS-THOUSAND DATASET
| Method | Year | Architecture | Accuracy | EER |
|---|---|---|---|---|
| Liu, N., et al. [140] | 2016 | DCNN | - | 0.15 |
| Nguyen, K., et al. [146] | 2017 | DCNN | 98.8% | - |
| Alaslani, M.G. [154] | 2018 | Alex-Net Model + SVM | 96.6% | - |
| Lee, Y.W., et al. [159] | 2019 | Deep ResNet | - | 1.3331 |
| Liu, Ming, et al. [162] | 2019 | DCNN | 83.1% | 0.16 |
| Chen, Y., et al. [175] | 2021 | DCNN | 99.14% | - |
| Alinia Lat, Reihan, et al. [188] | 2022 | DCNN | 99.84% | 1.87 |
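The EER column in these tables is the equal error rate, the operating point at which the false acceptance rate (FAR) equals the false rejection rate (FRR). The sketch below shows one way to estimate EER from genuine and impostor similarity scores by sweeping a decision threshold; the Gaussian score distributions are synthetic and serve only to demonstrate the computation.

```python
import numpy as np

def compute_eer(genuine_scores: np.ndarray, impostor_scores: np.ndarray) -> float:
    """Estimate the equal error rate by sweeping a decision threshold."""
    eer, best_gap = 1.0, np.inf
    for t in np.sort(np.concatenate([genuine_scores, impostor_scores])):
        frr = np.mean(genuine_scores < t)    # genuine attempts rejected
        far = np.mean(impostor_scores >= t)  # impostor attempts accepted
        if abs(far - frr) < best_gap:        # keep the point where FAR ≈ FRR
            best_gap, eer = abs(far - frr), (far + frr) / 2.0
    return eer

# Synthetic example: higher scores mean "more similar".
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 1000)
impostor = rng.normal(0.4, 0.1, 1000)
print(f"EER ≈ {compute_eer(genuine, impostor):.4f}")
```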
FACE-BASED DEEP LEARNING RESULTS USING THE LFW DATASET
| Method | Year | Architecture | Accuracy | EER |
|---|---|---|---|---|
| Tian, L., et al. [34] | 2016 | Multiple Scales Combined DL | 93.16% | - |
| Xiong, C., et al. [35] | 2016 | Deep Mixture Model (DMM) and Convolutional Fusion Network (CFN) | 87.50% | 1.57 |
| Al-Waisy, A. S., et al. [44] | 2017 | Deep Belief Network (DBN) | 98.83% | 0.012 |
| Zhuang, Ni, et al. [47] | 2018 | Deep transfer NN | 84.34% | - |
| Santoso, K., et al. [51] | 2018 | DL network using triplet loss | 95.5% | - |
| Li, Y., et al. [52] | 2018 | DCNN | 97.2% | - |
| Luo, D., et al. [54] | 2018 | Deep cascaded detection method | 99.43% | 0.16 |
| Kong, J., et al. [55] | 2018 | Novel DLN | 95.84% | - |
| Iqbal, M., et al. [56] | 2019 | DCNN | 99.77% | - |
| Khan, M. Z., et al. [57] | 2019 | DCNN | 97.9% | - |
| Elmahmudi, A., et al. [61] | 2019 | CNN + pre-trained VGG | 99% | - |
| Wang, P., et al. [62] | 2019 | Deep class-skewed learning method | 99.9% | - |
| Bendjillali, R., et al. [63] | 2019 | DCNN | 98.13% | - |
| Goel, T., et al. [66] | 2020 | Deep Convolutional-Optimized Kernel Extreme Learning Machine (DC-OKELM) | 99.2% | 0.04 |
| Zhang, J., et al. [86] | 2022 | Lightened CNN | 99.9% | - |
PERFORMANCE RESULTS OF THE BEST FINGER VEIN-BASED DEEP LEARNING MODELS
| Method | Year | Dataset | Architecture | Accuracy | EER |
|---|---|---|---|---|---|
| Nguyen, Dat Tien, et al. [192] | 2017 | - | CNN + SVM | - | 0.00 |
| Chen, Cheng, et al. [195] | 2017 | Collected | DBN + CNN | 99.6% | - |
| Fang, Y. et al. [198] | 2018 | MMCBNU | DCNN | - | 0.10 |
| Wang, Jun, et al. [200] | 2018 | PolyU | CNN + SVM | - | 0.068 |
| Das, Rig, et al. [201] | 2018 | UTFVP | CNN | 98.33% | - |
| Xie, C., et al. [203] | 2019 | - | CNN + Supervised Discrete Hashing | - | 0.093 |
| Li, J., et al [204] | 2019 | SDUMLA | Graph Neural Network (GNN) | 99.98% | - |
| Zhang, J., et al. [205] | 2019 | SDUMLA | Fully Convolutional GAN + CNN | 99.15% | 0.87 |
| Hou, B., et al. [206] | 2019 | FV-USM | Convolutional Auto-Encoder (CAE) + SVM | 99.95% | 0.12 |
| Kamaruddin, N.M., et al. [207] | 2019 | FV-USM | PCANET | 100% | - |
| Yang, W., et al. [209] | 2019 | MMCBNU | Proposed DL (multilayer extreme learning machine + binary decision diagram (BDD)) | 98.70% | - |
| Zhao, D., et al. [212] | 2020 | MMCBNU | DCNN | 99.05% | 0.503 |
| Kuzu, R.S. [214] | 2020 | SDUMLA | DCNN + Autoencoder | 99.99% | 0.009 |
| Kuzu, R., et al. [215] | 2020 | Collected | CNN + LSTM | 99.13% | - |
| Boucherit, I., et al. [216] | 2020 | THU-FVFDT2 | DCNN | 99.56% | - |
| Zhao, Jia-Yi, et al. [217] | 2020 | FV-USM | DCNN | 98% | - |
| Noh, K. J., et al. [219] | 2020 | HKPolyU | DCNN | - | 0.05 |
| Zeng, J., et al. [220] | 2020 | MMCBNU | RNN + Conditional Random Field (CRF) | - | 0.36 |
| Bilal, A., et al. [221] | 2021 | SDUMLA | DCNN | 99.84% | - |
| Shen, J., et al. [222] | 2021 | PKU-FVD | DCNN | 99.6% | 0.67 |
| Wang, K., et al. [223] | 2021 | FV-USM | Multi-Receptive Field Bilinear CNN | 100% | - |
| Hou, B [224] | 2021 | FV-USM | DCNN | 99.79% | 0.25 |
| Huang, J., et al. [225] | 2021 | MMCBNU | Joint Attention Finger Vein Network | - | 0.08 |
| Huang, Z., et al. [230] | 2021 | SDUMLA | DCNN | 99.53% | - |
| Shaheed, K., et al. [231] | 2022 | SDUMLA | DCNN | 99% | - |
| Muthusamy, D. [232] | 2022 | SDUMLA | Deep Perceptive Fuzzy NN (DPFNN) | 98% | - |
| Hou, B., et al. [235] | 2022 | FV-USM | Triplet-Classifier GAN | 99.66% | 0.03 |
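Several entries in the finger-vein table above (e.g., Nguyen et al. [192] and Wang et al. [200]) pair a CNN feature extractor with an SVM classifier. As a hedged illustration of how such a hybrid can be assembled, the sketch below extracts embeddings from a pre-trained ResNet-18 and trains a linear SVM on them; the backbone, the synthetic data, and the hyperparameters are assumptions for demonstration and do not reproduce the cited authors' configurations.

```python
import numpy as np
import torch
from torchvision import models
from sklearn.svm import SVC

# Pre-trained CNN backbone used as a fixed feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the ImageNet classification head
backbone.eval()

@torch.no_grad()
def cnn_features(images: torch.Tensor) -> np.ndarray:
    """Map a batch of (N, 3, 224, 224) images to 512-D embeddings."""
    return backbone(images).numpy()

# Synthetic stand-ins for preprocessed finger-vein images and identity labels.
train_imgs = torch.randn(20, 3, 224, 224)
train_labels = np.repeat(np.arange(4), 5)  # 4 subjects, 5 samples each
test_imgs = torch.randn(4, 3, 224, 224)

# Train an SVM on the CNN embeddings, then classify unseen samples.
clf = SVC(kernel="linear")
clf.fit(cnn_features(train_imgs), train_labels)
print(clf.predict(cnn_features(test_imgs)))
```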
IRIS-BASED DEEP LEARNING MODELS USING THE IITD DATASET
| Method | Year | Architecture | Accuracy | EER |
|---|---|---|---|---|
| Al-Waisy, Alaa S., et al. [147] | 2018 | DCNN + softmax | 100% | - |
| Alaslani, M.G. [154] | 2018 | Alex-Net + SVM | 98.3% | - |
| Chen, Ying, et al. [155] | 2019 | DCNN + softmax | 98.1% | - |
| Liu, Ming, et al. [162] | 2019 | DCNN | 86.8% | - |
| Chen, Y., et al. [173] | 2020 | DCNN | 99.3% | 0.74 |
| Chen, Y., et al. [175] | 2021 | DCNN | 97.24% | 0.18 |
| Chen, Ying, et al. [181] | 2021 | DenseSENet | 99.06% | 0.945 |
| Alinia Lat, Reihan, et al. [188] | 2022 | DCNN | 99.99% | 0.45 |
IRIS-BASED DEEP LEARNING MODELS USING MULTIPLE IRIS DATASETS
| Dataset | Method | Architecture | Accuracy | EER |
|---|---|---|---|---|
| CASIA-V4 | He, Fei, et al. [142] | Gabor + DBN | 99.998% | - |
| CASIA-V4 | Wang, Zi, et al. [150] | Convolutional and Residual network | 99.08% | - |
| CASIA-V4 | Zhang, Wei, et al. [161] | Fully Dilated U-Net (FD-UNet) | 97.36% | - |
| CASIA-V4 | Azam, M.S., et al. [171] | DCNN + SVM | 96.3% | - |
| CASIA-V4 | Chen, Y., et al. [175] | DCNN | 97.35% | 1.05 |
| UBIRIS | Proença, H., et al. [145] | DCNN | 99.8% | 0.019 |
| UBIRIS | Wang, Zi, et al. [150] | Convolutional and Residual network | 96.12% | - |
| UBIRIS | Zhang, Wei, et al. [161] | Fully Dilated U-Net (FD-UNet) | 94.81% | - |
| UBIRIS | Shirke, S.D., et al. [178] | DBN | 97.9% | - |
| ND | Nguyen, Kien, et al. [146] | Pre-trained CNNs | 98.7% | - |
| ND | Zhang, Wei, et al. [161] | Fully Dilated U-Net (FD-UNet) | 96.74% | - |
BIOMETRIC-BASED SYSTEMS REQUIREMENTS
| Requirement | Description |
|---|---|
| Universality | Every authorized individual must possess the biometric trait being used |
| Distinctiveness | No two authorized individuals share similar characteristics of the trait |
| Permanence | The acquired trait does not change over a given period of time |
| Performance | Measured by the achieved security, speed, accuracy, and robustness |
| Acceptability | The trait's use is accepted by the population of individuals without objection |
| Circumvention | The degree to which the system can be deceived with a fake biometric |
| Collectability | The ease of collecting trait samples in a manner comfortable for the individual |
BIOMETRICS FEATURES AND APPLICATIONS
| Biometric trait | Significant Features | Applications |
|---|---|---|
| Face | | |
| Fingerprint | | |
| Iris | | |
| Finger vein | | |
FACE-BASED DEEP LEARNING RESULTS USING THE YALE AND YALE FACE B DATASETS
| Method | Year | Architecture | Accuracy | EER |
|---|---|---|---|---|
| Tripathi, B. K. [46] | 2017 | One-Class-in-One-Neuron (OCON) DL | 97.4% | - |
| Kong, J., et al. [55] | 2018 | Novel DLN | 100% | - |
| Görgel, P., et al. [58] | 2019 | Deep Stacked De-Noising Sparse Autoencoders (DS-DSA) | 98.16% | - |
| Li, Y. K., et al. [60] | 2019 | DL network L1-2D2PCANet | 96.86% | 0.77 |
| Goel, T., et, al. [66] | 2020 | Deep Convolutional-Optimized Kernel Extreme Learning Machine (DC-OKELM) | - | 6.67 |
PERFORMANCE RESULTS OF THE BEST FINGERPRINT-BASED DEEP LEARNING MODELS
| Method | Year | Dataset | Architecture | Accuracy | EER |
|---|---|---|---|---|---|
| Kim, S., et al. [91] | 2016 | Collected | DBN | 99.4% | - |
| Jeon, W. S. et al. [95] | 2017 | FVC | DCNN | 97.2% | - |
| Wang, Z., et al. [96] | 2017 | NIST | Novel approach (D-LVQ) | 99.075% | - |
| Peralta, D., et al. [100] | 2018 | Collected | DCNN | 99.6% | - |
| Yu, Y., et al. [101] | 2018 | Collected | DCNN | 96.46% | - |
| Lin, C., et al. [102] | 2018 | - | DCNN | 99.89% | 0.64 |
| Jung, H. Y., et al. [103] | 2018 | - | DCNN | 98.6% | - |
| Yuan, C., et al. [111] | 2019 | LivDet 2013 | Deep Residual Network (DRN) | 97.04% | - |
| Haider, Amir, et al. [115] | 2019 | Collected | DCNN | 95.94% | - |
| Song, D., et al. [116] | 2019 | Collected | 1-D CNN | - | 0.06 |
| Uliyan, D.M., et al. [118] | 2020 | LivDet 2013 | Deep Boltzmann Machines along with KNN | 96% | - |
| Liu, Feng, et al. [119] | 2020 | - | DeepPoreID | - | 0.16 |
| Yang, X., et al. [120] | 2020 | Collected | DCNN | 97.1% | - |
| Arora, S., et al. [122] | 2020 | DigitalPersona 2015 | DCNN | 99.80% | - |
| Zhang, Z., et al. [124] | 2021 | - | DCNN | 98.24% | - |
| Ahsan, M., et al. [125] | 2021 | Collected | Gabor filtering + DCNN + PCA | 99.87% | 4.28 |
| Leghari, M., et al. [126] | 2021 | Collected | DCNN | 99.87% | - |
| Li, H. [127] | 2021 | NIST | DCNN | 98.65% | - |
| Lee, Samuel, et al. [129] | 2021 | NIST | Proposed Pix2Pix DL model | 100% | - |
| Nahar, P., et al. [131] | 2021 | - | DCNN | 99.1% | - |
| Ibrahim, A.M., et al. [132] | 2021 | - | DCNN | 99.22% | - |
| Gustisyaf, A.I., et al. [133] | 2021 | Collected | DCNN | 99.9667% | - |
| Yuan, C., Yu, et al. [135] | 2022 | - | DCNN | - | 0.3 |
| Saeed, F., et al. [137] | 2022 | FVC | DCNN | 98.89% | - |
THE ADVANTAGES AND DISADVANTAGES OF THE MOST WIDELY USED DEEP LEARNING ARCHITECTURES
| Architecture | Advantages | Disadvantages |
|---|---|---|
| CNN | | |
| RNN | | |
| LSTM | | |
| GRU | | |
| AE | | |
| DBN | | |
| GAN | | |