
Deep Learning Models for Biometric Recognition based on Face, Finger vein, Fingerprint, and Iris: A Survey

Open Access | Jun 2024

Figures & Tables

Figure 1:

Organization of the survey

Figure 2:

Biometric Traits

Figure 3:

Example of a general biometric system [8]

IRIS-BASED DEEP LEARNING MODELS USING THE CASIA-IRIS-THOUSAND DATASET

Method | Year | Architecture | Accuracy | EER
Liu, N., et al. [140] | 2016 | DCNN | - | 0.15
Nguyen, K., et al. [146] | 2017 | DCNN | 98.8% | -
Alaslani, M.G. [154] | 2018 | AlexNet model + SVM | 96.6% | -
Lee, Y.W., et al. [159] | 2019 | Deep ResNet | - | 1.3331
Liu, Ming, et al. [162] | 2019 | DCNN | 83.1% | 0.16
Chen, Y., et al. [175] | 2021 | DCNN | 99.14% | -
Alinia Lat, Reihan, et al. [188] | 2022 | DCNN | 99.84% | 1.87
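
The tables in this survey report accuracy alongside the equal error rate (EER), the operating point at which the false accept rate (FAR) equals the false reject rate (FRR). The following minimal sketch is illustrative only and not taken from any of the surveyed papers; it estimates an EER from hypothetical genuine and impostor similarity scores using NumPy.

# Illustrative EER computation on hypothetical match scores (higher = more similar).
import numpy as np

def equal_error_rate(genuine, impostor):
    # Sweep every observed score as a candidate decision threshold.
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])  # impostors wrongly accepted
    frr = np.array([(genuine < t).mean() for t in thresholds])    # genuine users wrongly rejected
    idx = np.argmin(np.abs(far - frr))                            # point where FAR ~= FRR
    return (far[idx] + frr[idx]) / 2.0

genuine_scores = np.array([0.91, 0.85, 0.88, 0.95, 0.80])   # hypothetical same-person scores
impostor_scores = np.array([0.30, 0.45, 0.86, 0.60, 0.41])  # hypothetical different-person scores
print(f"EER = {equal_error_rate(genuine_scores, impostor_scores):.4f}")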

FACE-BASED DEEP LEARNING RESULTS USING THE LFW DATASET

Method | Year | Architecture | Accuracy | EER
Tian, L., et al. [34] | 2016 | Multiple Scales Combined DL | 93.16% | -
Xiong, C., et al. [35] | 2016 | Deep Mixture Model (DMM) and Convolutional Fusion Network (CFN) | 87.50% | 1.57
Al-Waisy, A. S., et al. [44] | 2017 | Deep Belief Network (DBN) | 98.83% | 0.012
Zhuang, Ni, et al. [47] | 2018 | Deep transfer NN | 84.34% | -
Santoso, K., et al. [51] | 2018 | DL network using triplet loss | 95.5% | -
Li, Y., et al. [52] | 2018 | DCNN | 97.2% | -
Luo, D., et al. [54] | 2018 | Deep cascaded detection method | 99.43% | 0.16
Kong, J., et al. [55] | 2018 | Novel DLN | 95.84% | -
Iqbal, M., et al. [56] | 2019 | DCNN | 99.77% | -
Khan, M. Z., et al. [57] | 2019 | DCNN | 97.9% | -
Elmahmudi, A., et al. [61] | 2019 | CNN + pre-trained VGG | 99% | -
Wang, P., et al. [62] | 2019 | Deep class-skewed learning method | 99.9% | -
Bendjillali, R., et al. [63] | 2019 | DCNN | 98.13% | -
Goel, T., et al. [66] | 2020 | Deep Convolutional-Optimized Kernel Extreme Learning Machine (DC-OKELM) | 99.2% | 0.04
Zhang, J., et al. [86] | 2022 | Lightened CNN | 99.9% | -
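
Several of the surveyed methods pair a pre-trained CNN used as a fixed feature extractor with a classical classifier such as an SVM (for example, the CNN + pre-trained VGG approach of [61] and the AlexNet + SVM approach of [154]). The sketch below is a generic illustration of that pattern rather than a reproduction of any specific paper's pipeline; it assumes PyTorch, torchvision, and scikit-learn are available, and the dataset variables are placeholders.

# Generic "pre-trained CNN features + SVM" pipeline (illustrative only).
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC

# Frozen VGG-16 backbone with the final classification layer removed (4096-d output).
backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
backbone.classifier = torch.nn.Sequential(*list(backbone.classifier.children())[:-1])
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(pil_images):
    # Map a list of PIL face images to 4096-d feature vectors.
    batch = torch.stack([preprocess(img) for img in pil_images])
    with torch.no_grad():
        return backbone(batch).numpy()

# train_images, train_labels, and test_images are placeholders for a real dataset split:
# X_train = extract_features(train_images)
# svm = SVC(kernel="linear").fit(X_train, train_labels)
# predictions = svm.predict(extract_features(test_images))

Keeping the backbone frozen means only the SVM is trained, which is a common choice when the available biometric dataset is small.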

PERFORMANCE RESULTS OF THE BEST FINGER VEIN-BASED DEEP LEARNING MODELS

Method | Year | Dataset | Architecture | Accuracy | EER
Nguyen, Dat Tien, et al. [192] | 2017 | - | CNN + SVM | - | 0.00
Chen, Cheng, et al. [195] | 2017 | Collected | DBN + CNN | 99.6% | -
Fang, Y., et al. [198] | 2018 | MMCBNU | DCNN | - | 0.10
Wang, Jun, et al. [200] | 2018 | PolyU | CNN + SVM | - | 0.068
Das, Rig, et al. [201] | 2018 | UTFVP | CNN | 98.33% | -
Xie, C., et al. [203] | 2019 | - | CNN + Supervised Discrete Hashing | - | 0.093
Li, J., et al. [204] | 2019 | SDUMLA | Graph Neural Network (GNN) | 99.98% | -
Zhang, J., et al. [205] | 2019 | SDUMLA | Fully Convolutional GAN + CNN | 99.15% | 0.87
Hou, B., et al. [206] | 2019 | FV-USM | Convolutional Auto-Encoder (CAE) + SVM | 99.95% | 0.12
Kamaruddin, N.M., et al. [207] | 2019 | FV-USM | PCANet | 100% | -
Yang, W., et al. [209] | 2019 | MMCBNU | Proposed DL (multilayer extreme learning machine + binary decision diagram (BDD)) | 98.70% | -
Zhao, D., et al. [212] | 2020 | MMCBNU | DCNN | 99.05% | 0.503
Kuzu, R.S. [214] | 2020 | SDUMLA | DCNN + Autoencoder | 99.99% | 0.009
Kuzu, R., et al. [215] | 2020 | Collected | CNN + LSTM | 99.13% | -
Boucherit, I., et al. [216] | 2020 | THU-FVFDT2 | DCNN | 99.56% | -
Zhao, Jia-Yi, et al. [217] | 2020 | FV-USM | DCNN | 98% | -
Noh, K. J., et al. [219] | 2020 | HKPolyU | DCNN | - | 0.05
Zeng, J., et al. [220] | 2020 | MMCBNU | RNN + Conditional Random Field (CRF) | - | 0.36
Bilal, A., et al. [221] | 2021 | SDUMLA | DCNN | 99.84% | -
Shen, J., et al. [222] | 2021 | PKU-FVD | DCNN | 99.6% | 0.67
Wang, K., et al. [223] | 2021 | FV-USM | Multi-Receptive Field Bilinear CNN | 100% | -
Hou, B. [224] | 2021 | FV-USM | DCNN | 99.79% | 0.25
Huang, J., et al. [225] | 2021 | MMCBNU | Joint Attention Finger Vein Network | - | 0.08
Huang, Z., et al. [230] | 2021 | SDUMLA | DCNN | 99.53% | -
Shaheed, K., et al. [231] | 2022 | SDUMLA | DCNN | 99% | -
Muthusamy, D. [232] | 2022 | SDUMLA | Deep Perceptive Fuzzy NN (DPFNN) | 98% | -
Hou, B., et al. [235] | 2022 | FV-USM | Triplet-Classifier GAN | 99.66% | 0.03

IRIS-BASED DEEP LEARNING MODEL RESULTS USING THE IITD DATASET

Method | Year | Architecture | Accuracy | EER
Al-Waisy, Alaa S., et al. [147] | 2018 | DCNN + softmax | 100% | -
Alaslani, M.G. [154] | 2018 | AlexNet + SVM | 98.3% | -
Chen, Ying, et al. [155] | 2019 | DCNN + softmax | 98.1% | -
Liu, Ming, et al. [162] | 2019 | DCNN | 86.8% | -
Chen, Y., et al. [173] | 2020 | DCNN | 99.3% | 0.74
Chen, Y., et al. [175] | 2021 | DCNN | 97.24% | 0.18
Chen, Ying, et al. [181] | 2021 | DenseSENet | 99.06% | 0.945
Alinia Lat, Reihan, et al. [188] | 2022 | DCNN | 99.99% | 0.45

IRIS-BASED DEEP LEARNING MODELS USING MULTIPLE IRIS DATASETS

Dataset | Method | Architecture | Accuracy | EER
CASIA-V4 | He, Fei, et al. [142] | Gabor + DBN | 99.998% | -
CASIA-V4 | Wang, Zi, et al. [150] | Convolutional and residual network | 99.08% | -
CASIA-V4 | Zhang, Wei, et al. [161] | Fully Dilated U-Net (FD-UNet) | 97.36% | -
CASIA-V4 | Azam, M.S., et al. [171] | DCNN + SVM | 96.3% | -
CASIA-V4 | Chen, Y., et al. [175] | DCNN | 97.35% | 1.05
UBIRIS | Proença, H., et al. [145] | DCNN | 99.8% | 0.019
UBIRIS | Wang, Zi, et al. [150] | Convolutional and residual network | 96.12% | -
UBIRIS | Zhang, Wei, et al. [161] | Fully Dilated U-Net (FD-UNet) | 94.81% | -
UBIRIS | Shirke, S.D., et al. [178] | DBN | 97.9% | -
ND | Nguyen, Kien, et al. [146] | Pre-trained CNNs | 98.7% | -
ND | Zhang, Wei, et al. [161] | Fully Dilated U-Net (FD-UNet) | 96.74% | -

BIOMETRIC-BASED SYSTEM REQUIREMENTS

Requirement | Description
Universality | Every authorized individual must possess the biometric trait being used
Distinctiveness | No two authorized individuals share similar characteristics of the trait
Permanence | The trait does not change over a given period of time
Performance | Reflected in the security, speed, accuracy, and robustness achieved
Acceptability | Accepted by the population of individuals without objection
Circumvention | The degree to which the system can be fooled using a fake biometric
Collectability | The ease of gathering trait samples in a manner comfortable for the individual

BIOMETRIC TRAITS: FEATURES AND APPLICATIONS

Face
  Significant features:
  • No need for physical contact
  • Templates are easy to store
  • Comfortable, with less complicated statistics
  • Rapid identification procedure
  • Changes with time, age, and incidental events
  • Distinguishing between twins is difficult
  • Affected by lighting in the surrounding environment
  • May be partially occluded by other objects
  Applications:
  • Access control
  • Face ID
  • Human-computer interaction
  • Criminal identification
  • Monitoring
  • Smart cards

Fingerprint
  Significant features:
  • Modern, reliable, safe, highly accurate, and low cost
  • Rapid matching
  • Requires little memory space
  • Affected by wounds, dust, and skin distortion
  • Requires physical contact
  Applications:
  • Driver authentication
  • Criminal identification and forensics
  • Authentication for license and visa cards
  • Access control

Iris
  Significant features:
  • Scalable, accurate, and highly covered
  • Small sample size
  • Rapid processing, but high cost
  • Unparalleled (highly distinctive) structure
  • Remains stable throughout life
  • Difficult to alter
  • High randomness
  • No physical contact needed; only user cooperation is required
  • May be occluded by parts of the eye, such as eyelashes
  • Affected by some medical conditions
  Applications:
  • Criminal identification and forensics
  • Identification
  • Access control
  • National security at seaports, land borders, and airports

Finger vein
  Significant features:
  • Hygienic; no touch required
  • Highly accurate and hard to spoof
  • Unique
  • Affected by body temperature
  • Affected by some diseases
  • Small template size
  • Minimal processing
  Applications:
  • Driver identification
  • Door security access
  • Banking services
  • Physical access monitoring and time attendance
  • Airports, hospitals, and schools

FACE-BASED DEEP LEARNING RESULTS USING THE YALE AND YALE FACE B DATASETS

Method | Year | Architecture | Accuracy | EER
Tripathi, B. K. [46] | 2017 | One-Class-in-One-Neuron (OCON) DL | 97.4% | -
Kong, J., et al. [55] | 2018 | Novel DLN | 100% | -
Görgel, P., et al. [58] | 2019 | Deep Stacked De-Noising Sparse Autoencoders (DS-DSA) | 98.16% | -
Li, Y. K., et al. [60] | 2019 | DL network L1-2D2PCANet | 96.86% | 0.77
Goel, T., et al. [66] | 2020 | Deep Convolutional-Optimized Kernel Extreme Learning Machine (DC-OKELM) | - | 6.67

PERFORMANCE RESULTS OF THE BEST FINGERPRINT-BASED DEEP LEARNING MODELS

Method | Year | Dataset | Architecture | Accuracy | EER
Kim, S., et al. [91] | 2016 | Collected | DBN | 99.4% | -
Jeon, W. S., et al. [95] | 2017 | FVC | DCNN | 97.2% | -
Wang, Z., et al. [96] | 2017 | NIST | Novel approach (D-LVQ) | 99.075% | -
Peralta, D., et al. [100] | 2018 | Collected | DCNN | 99.6% | -
Yu, Y., et al. [101] | 2018 | Collected | DCNN | 96.46% | -
Lin, C., et al. [102] | 2018 | - | DCNN | 99.89% | 0.64
Jung, H. Y., et al. [103] | 2018 | - | DCNN | 98.6% | -
Yuan, C., et al. [111] | 2019 | LivDet 2013 | Deep Residual Network (DRN) | 97.04% | -
Haider, Amir, et al. [115] | 2019 | Collected | DCNN | 95.94% | -
Song, D., et al. [116] | 2019 | Collected | 1-D CNN | - | 0.06
Uliyan, D.M., et al. [118] | 2020 | LivDet 2013 | Deep Boltzmann Machines with KNN | 96% | -
Liu, Feng, et al. [119] | 2020 | - | DeepPoreID | - | 0.16
Yang, X., et al. [120] | 2020 | Collected | DCNN | 97.1% | -
Arora, S., et al. [122] | 2020 | DigitalPersona 2015 | DCNN | 99.80% | -
Zhang, Z., et al. [124] | 2021 | - | DCNN | 98.24% | -
Ahsan, M., et al. [125] | 2021 | Collected | Gabor filtering and DCNN + PCA | 99.87% | 4.28
Leghari, M., et al. [126] | 2021 | Collected | DCNN | 99.87% | -
Li, H. [127] | 2021 | NIST | DCNN | 98.65% | -
Lee, Samuel, et al. [129] | 2021 | NIST | Proposed Pix2Pix DL model | 100% | -
Nahar, P., et al. [131] | 2021 | - | DCNN | 99.1% | -
Ibrahim, A.M., et al. [132] | 2021 | - | DCNN | 99.22% | -
Gustisyaf, A.I., et al. [133] | 2021 | Collected | DCNN | 99.9667% | -
Yuan, C., Yu, et al. [135] | 2022 | - | DCNN | - | 0.3
Saeed, F., et al. [137] | 2022 | FVC | DCNN | 98.89% | -

THE ADVANTAGES AND DISADVANTAGES OF THE MOST WIDELY USED DEEP LEARNING ARCHITECTURES

CNN
  Advantages:
  • Automatic feature learning (no hand-crafted features)
  • Low complexity thanks to weight sharing and a modest parameter count (see the sketch after this table)
  • High performance in image recognition and classification
  Disadvantages:
  • Requires a large dataset
  • Long training time
  • Struggles with input variations (e.g., orientation, position, environment)

RNN
  Advantages:
  • Can remember and learn from past data to give better predictions
  • Able to capture long sequential patterns in large datasets
  • Often used for natural language processing tasks
  Disadvantages:
  • Computationally expensive
  • More prone to overfitting and vanishing-gradient problems
  • Hard to optimize due to the large number of layers and parameters

LSTM
  Advantages:
  • Handles long-term dependencies better
  • The gated LSTM cell makes it less susceptible to the vanishing-gradient problem
  • Very effective at modeling complex sequential data
  Disadvantages:
  • More complicated than plain RNNs
  • Requires more training data to learn effectively
  • Not well suited to every prediction or classification task
  • Slow to train on large datasets
  • Does not work effectively for all kinds of data, such as nonlinear or noisy data

GRU
  Advantages:
  • Uses less memory and is faster than LSTM
  • Has fewer parameters than LSTM
  Disadvantages:
  • Low learning efficiency due to a slow convergence rate
  • Long training time
  • May suffer from under-fitting

AE
  Advantages:
  • Unsupervised; does not need labeled data for training
  • Converts high-dimensional data into low-dimensional features
  • Scales well as the amount of data grows
  • Reduces noise in the input data
  Disadvantages:
  • High complexity
  • Computationally expensive
  • Needs a large training dataset
  • Loses interpretability when features are represented in a latent space

DBN
  Advantages:
  • Unsupervised feature learning
  • Robust classification under variations in size, position, color, view angle, and rotation
  • Applicable to many kinds of datasets
  • Resistant to overfitting, as the RBMs contribute to model regularization
  • Can manage missing data
  Disadvantages:
  • High complexity
  • Computationally expensive
  • Needs a large training dataset

GAN
  Advantages:
  • Can deal with partially labeled data
  • Efficiently generates samples that resemble the original data
  • Used for generating images and videos
  Disadvantages:
  • Hard to train, as it needs varied data supplied in a continuous manner
  • Training cannot be completed when patterns are missing
  • Has difficulty dealing with discrete data (e.g., text)
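
As a concrete companion to the CNN row above, the following minimal PyTorch sketch (not tied to any surveyed model) shows how convolutional weight sharing keeps the trainable parameter count low; the 64x64 grayscale input size, channel widths, and class count are arbitrary assumptions for illustration.

# Small illustrative CNN; each 3x3 kernel is shared across all spatial positions.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # A 64x64 input becomes 16x16 feature maps after two 2x2 poolings.
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

model = SmallCNN()
print(sum(p.numel() for p in model.parameters()), "trainable parameters")
print(model(torch.randn(1, 1, 64, 64)).shape)  # torch.Size([1, 10])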

Language: English
Page range: 117 - 157
Submitted on: May 23, 2024
Accepted on: Jun 7, 2024
Published on: Jun 15, 2024
Published by: Future Sciences For Digital Publishing
In partnership with: Paradigm Publishing Services
Publication frequency: 2 issues per year

© 2024 Saif Mohanad Kadhim, Johnny Koh Siaw Paw, Yaw Chong Tak, Shahad Ameen, published by Future Sciences For Digital Publishing
This work is licensed under the Creative Commons Attribution 4.0 License.