Deep learning in public health: evaluating anemia detection methods

Figures & Tables

Figure 1: Methodology.

Figure 2: Site location in Dehradun district.

Figure 3: UPES to Dakrani.

Figure 4: UPES to Jamnipur.

Figure 5: UPES to Atthorwala.

Figure 6: UPES to Nawabgarh.

Figure 7: UPES to Mazri Grant.

Figure 8: UPES to Shyampur.

Figure 9: UPES to Charba.

Figure 10: Data preprocessing.
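
As a rough illustration of the preprocessing step shown above, the sketch below sets up a typical Keras image pipeline (rescaling plus light augmentation). The directory layout, 224 × 224 target size, and 80/20 split are assumptions for illustration; only the batch size of 32 comes from the comparison table later in this section.

```python
# Minimal preprocessing sketch (assumed pipeline, not the authors' exact code).
import tensorflow as tf

IMG_SIZE = (224, 224)   # assumed input resolution
BATCH_SIZE = 32         # batch size reported in the comparison table below

# Rescale pixel values to [0, 1] and apply light augmentation to the training split.
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=15,
    horizontal_flip=True,
    validation_split=0.2,   # assumed train/validation split
)

# Assumed directory layout: data/<class_name>/<image files>
train_data = datagen.flow_from_directory(
    "data", target_size=IMG_SIZE, batch_size=BATCH_SIZE,
    class_mode="categorical", subset="training")
val_data = datagen.flow_from_directory(
    "data", target_size=IMG_SIZE, batch_size=BATCH_SIZE,
    class_mode="categorical", subset="validation")
```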

Figure 11: CNN architecture. CNN, convolutional neural network.
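
For readers unfamiliar with the building blocks in the figure, the minimal Keras sketch below shows a generic CNN of the same flavor: stacked convolution and pooling layers followed by dense layers. The filter counts, depth, and two-class output are illustrative assumptions, not the paper's exact architecture.

```python
# Generic CNN sketch (illustrative layer sizes; not the paper's exact architecture).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 2  # assumed: anemic vs. non-anemic

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```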

Figure 12: VGG16 architecture.
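
VGG16 is typically used here as a pretrained backbone. The sketch below shows one common transfer-learning setup (frozen ImageNet weights with a small classification head); the head layers and two output classes are assumptions, not details taken from the paper.

```python
# VGG16 transfer-learning sketch (assumed setup).
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained convolutional backbone

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(2, activation="softmax"),  # assumed two classes (anemic / non-anemic)
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```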

Figure 13: InceptionV3 architecture.
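
Under the same assumptions as the VGG16 sketch above, only the backbone (and its conventional input size) changes for the other pretrained models compared in this paper.

```python
# Backbone-swap sketch (assumed; attach the same classification head as in the VGG16 sketch).
import tensorflow as tf

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False

# DenseNet121 can be swapped in the same way:
# base = tf.keras.applications.DenseNet121(
#     weights="imagenet", include_top=False, input_shape=(224, 224, 3))
```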

Figure 14: Applied framework.

Figure 15: Evaluation metrics for deep learning models.
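
The metrics in the figure are standard classification measures. The short sketch below shows how they are typically computed with scikit-learn; the label arrays are placeholders, not the study's data.

```python
# Standard evaluation-metrics sketch (placeholder labels, not the study's data).
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [0, 1, 1, 0, 1]   # placeholder ground-truth labels
y_pred = [0, 1, 0, 0, 1]   # placeholder model predictions

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F1 score :", f1_score(y_true, y_pred))
```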

Figure 16: (A) Training and validation accuracy for each epoch, (B) training and validation loss for each epoch, and (C) confusion matrix for each class of the InceptionV3 model.
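
Panels (A) and (B) of figures of this kind are usually drawn directly from the Keras training history. A minimal plotting sketch, assuming a `History` object returned by `model.fit()`:

```python
# Sketch of the accuracy/loss-per-epoch curves (panels A and B).
import matplotlib.pyplot as plt

def plot_training_curves(history):
    """Plot training/validation accuracy and loss from a Keras History object."""
    fig, (ax_acc, ax_loss) = plt.subplots(1, 2, figsize=(10, 4))

    ax_acc.plot(history.history["accuracy"], label="training")
    ax_acc.plot(history.history["val_accuracy"], label="validation")
    ax_acc.set_xlabel("Epoch"); ax_acc.set_ylabel("Accuracy"); ax_acc.legend()

    ax_loss.plot(history.history["loss"], label="training")
    ax_loss.plot(history.history["val_loss"], label="validation")
    ax_loss.set_xlabel("Epoch"); ax_loss.set_ylabel("Loss"); ax_loss.legend()

    plt.tight_layout()
    plt.show()
```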

Figure 17: (A) Training and validation accuracy for each epoch, (B) training and validation loss for each epoch, and (C) confusion matrix for each class of the VGG16 model.
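
Panel (C) is a per-class confusion matrix. A sketch of how such a matrix is typically computed and displayed with scikit-learn (the label arrays and class names are placeholders):

```python
# Confusion-matrix sketch (placeholder labels and class names).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

y_true = np.array([0, 1, 1, 0, 1, 0])   # placeholder ground-truth class indices
y_pred = np.array([0, 1, 0, 0, 1, 1])   # placeholder predicted class indices

cm = confusion_matrix(y_true, y_pred)
ConfusionMatrixDisplay(cm, display_labels=["non-anemic", "anemic"]).plot()
plt.show()
```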

Figure 18: (A) Training and validation accuracy for each epoch, (B) training and validation loss for each epoch, and (C) confusion matrix for each class of the DenseNet121 model.

About blood cells

| Cell category | Size (μm) | Blood life span | Quantity of cells (per μL) | Functions |
|---|---|---|---|---|
| RBC | 6–8 | 120 days | Male: 4.5–6.5 × 10⁶; Female: 3.9–5.6 × 10⁶ | Conveyance of O₂ and CO₂ |
| Thrombocytes | 0.5–3.0 | 10 days | 140–1,400 × 10³ | Coagulation |
| Phagocytes: Neutrophils | 12–15 | 6–10 hr | 1.9–7.6 × 10³ (48%–76%) | Protection against microorganisms such as fungi and bacteria |
| Phagocytes: Monocytes | 12–20 | 20–40 hr | 0.2–0.8 × 10³ (2.5%–8.5%) | Defense against pathogens such as fungi and bacteria |
| Phagocytes: Acidophils | 12–15 | Days | 0.04–0.44 × 10³ (<5%) | Defense against pathogens |
| Lymphocytes | 7–9 (resting); 12–20 (active) | Weeks or years | 1.5–3.5 × 10³ (18%–41%) | B-cells: assist in antibody production and the activation of T-cells; T-cells: involved in viral defense and immune response |

Comparison of accuracy and time per epoch for the three deep learning models

| Model | Number of epochs | Accuracy (%) | Batch size | Time for each epoch |
|---|---|---|---|---|
| VGG16 | 90 | 93.43 | 32 | 78 ms/step |
| DenseNet | 90 | 90.48 | 32 | 89 ms/step |
| InceptionV3 | 90 | 78.80 | 32 | 100 ms/step |
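
For orientation, the table's settings map directly onto a Keras training call. The sketch below assumes the `model` and the `train_data`/`val_data` generators from the earlier sketches; the epoch count (90), the batch size (32, set in the generators), and the per-step timing format come from the table.

```python
# Training-call sketch matching the table's settings (90 epochs, batch size 32 set in the generators).
history = model.fit(
    train_data,               # generators already batch with batch_size=32
    validation_data=val_data,
    epochs=90,                # epoch count reported in the table
    verbose=1,                # Keras prints per-step timing such as "78ms/step" (last column)
)
```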

Summary of significant papers from the last decade

| Year | Paper ref. | Data set | Deep learning models | Accuracy (%) |
|---|---|---|---|---|
| 2016 | [12] | All four classifiers were trained on a dataset of 500 instances. | ANN, DT, k-NN, and NB | 96.63 |
| 2023 | [13] | Datasets collected from 10 health facilities across the country. | Machine learning approach to detect iron-deficiency anemia using NB, CNN, SVM, k-NN, and DT algorithms | 99.12 |
| 2023 | [16] | Time-domain analysis relates blood hemoglobin concentration to palm color variations caused by applying and releasing pressure; a smartphone camera sensor captures the full sequence of palm color changes produced by a bespoke gadget, which is then processed and analyzed. | Dual-mode information fusion with pre-trained CNN models and a transformer | 96.29 |
| 2023 | [11] | A specially constructed dataset of 2,592 pediatric palpebral images. | UCE, U-Net++, FCN, PSPNet, and LinkNet | 94.14 |
| 2023 | [15] | Started with 527 samples; 2,635 more were added by translation, flipping, and rotation. | CNN, k-NN, Naive Bayes, SVM, and DT used to build the proposed anemia-identification models | 99.92 |
| 2024 | [14] | A dataset of 527 conjunctiva images, subsequently augmented to 2,635. | CNN, k-NN, NB, DT, and SVM used to detect anemia | 98.45 |
Language: English
Submitted on: Aug 23, 2025 | Published on: Feb 17, 2026
In partnership with: Paradigm Publishing Services
Publication frequency: 1 issue per year

© 2026 Neelu Jyothi Ahuja, Jyoti Upadhyay, Tanupriya Choudhury, Ashish Jain, Bhupesh Kumar Dewangan, Ketan Kotecha, published by Professor Subhas Chandra Mukhopadhyay
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.