
Brief Overview of Neural Networks for Medical Applications

Open Access | Aug 2022

References

  1. [1] K. S. CHOI and L. SUNWOO. “Artificial Intelligence in Neuroimaging: Clinical Applications”. In: Investigative Magnetic Resonance Imaging 26.1 (2022), pp. 1–9.
  2. [2] D. S. LARA-MARTINEZ et al. “Artificial intelligence opportunities in cardio-oncology: Overview with spotlight on electrocardiography”. In: American Heart Journal Plus: Cardiology Research and Practice (2022), p. 100129. DOI: 10.1016/j.ahjo.2022.100129.
  3. [3] T. B. SHAMS et al. “EEG-based Biometric Authentication Using Machine Learning: A Comprehensive Survey”. In: ().
  4. [4] G. CHOY et al. “Current applications and future impact of machine learning in radiology”. In: Radiology 288.2 (2018), pp. 318–328.
  5. [5] M. L. GIGER. “Machine learning in medical imaging”. In: Journal of the American College of Radiology 15.3 (2018), pp. 512–520. DOI: 10.1016/j.jacr.2017.12.028.
  6. [6] A. GHORBANI, A. ABID, and J. ZOU. “Interpretation of neural networks is fragile”. In: Proceedings of the AAAI conference on artificial intelligence. Vol. 33. 01. 2019, pp. 3681–3688. DOI: 10.1609/aaai.v33i01.33013681.
  7. [7] I. GOODFELLOW, Y. BENGIO, and A. COURVILLE. Deep Learning. http://www.deeplearningbook.org. MIT Press, 2016. ISBN: 9780262035613.
  8. [8] G. CYBENKO. “Approximation by superpositions of a sigmoidal function”. In: Mathematics of Control, Signals, and Systems (MCSS) 2.4 (Dec. 1989), pp. 303–314. ISSN: 0932-4194. DOI: 10.1007/BF02551274.
  9. [9] K. HORNIK, M. STINCHCOMBE, and H. WHITE. “Multilayer feedforward networks are universal approximators”. In: Neural Networks 2.5 (1989), pp. 359–366. ISSN: 0893-6080.
  10. [10] M. A. KRAMER. “Nonlinear principal component analysis using autoassociative neural networks”. In: AIChE Journal 37.2 (Feb. 1991), pp. 233–243. DOI: 10.1002/aic.690370209.
  11. [11] S. CHAURASIA, S. GOYAL, and M. RAJPUT. “Outlier Detection Using Autoencoder Ensembles: A Robust Unsupervised Approach”. In: 2020 International Conference on Contemporary Computing and Applications (IC3A) (2020), pp. 76–80.
  12. [12] D. H. HUBEL and T. N. WIESEL. “Receptive fields and functional architecture of monkey striate cortex”. In: The Journal of physiology 195.1 (1968), pp. 215–243.
  13. [13] C. C. AGGARWAL. Neural Networks and Deep Learning. Springer, 2018.
  14. [14] J. GU et al. “Recent advances in convolutional neural networks”. In: Pattern Recognition 77 (2018), pp. 354–377. DOI: 10.1016/j.patcog.2017.10.013.
  15. [15] A. HYVÄRINEN and U. KÖSTER. “Complex cell pooling and the statistics of natural images”. In: Network: Computation in Neural Systems 18.2 (2007), pp. 81–100.
  16. [16] M. A. WANI et al. Advances in Deep Learning. Springer, 2020. DOI: 10.1007/978-981-13-6794-6.
  17. [17] Y. LECUN et al. “Handwritten digit recognition with a back-propagation network”. In: Advances in neural information processing systems. 1990, pp. 396–404.
  18. [18] Y. LECUN et al. “Gradient-based learning applied to document recognition”. In: Proceedings of the IEEE 86.11 (1998), pp. 2278–2324. DOI: 10.1109/5.726791.
  19. [19] A. KRIZHEVSKY, I. SUTSKEVER, and G. E. HINTON. “Imagenet classification with deep convolutional neural networks”. In: Advances in neural information processing systems 25 (2012), pp. 1097–1105.
  20. [20] K. HE et al. “Deep residual learning for image recognition”. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2016, pp. 770–778. DOI: 10.1109/CVPR.2016.90.
  21. [21] K. SIMONYAN and A. ZISSERMAN. “Very deep convolutional networks for large-scale image recognition”. In: arXiv preprint arXiv:1409.1556 (2014).
  22. [22] C. SZEGEDY et al. “Going deeper with convolutions”. In: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2015, pp. 1–9. DOI: 10.1109/CVPR.2015.7298594.
  23. [23] W. ZHIQIANG and L. JUN. “A review of object detection based on convolutional neural network”. In: 2017 36th Chinese Control Conference (CCC). IEEE. 2017, pp. 11104–11109. DOI: 10.23919/ChiCC.2017.8029130.
  24. [24] A. DHILLON and G. K. VERMA. “Convolutional neural network: a review of models, methodologies and applications to object detection”. In: Progress in Artificial Intelligence 9.2 (2020), pp. 85–112.
  25. [25] T. GUO et al. “Simple convolutional neural network on image classification”. In: 2017 IEEE 2nd International Conference on Big Data Analysis (ICBDA). IEEE. 2017, pp. 721–724. DOI: 10.1109/ICBDA.2017.8078730.
  26. [26] F. SULTANA, A. SUFIAN, and P. DUTTA. “Advancements in Image Classification using Convolutional Neural Network”. In: 2018 Fourth International Conference on Research in Computational Intelligence and Communication Networks (ICRCICN). 2018, pp. 122–129. DOI: 10.1109/ICRCICN.2018.8718718.
  27. [27] L. VAVREK et al. “Deep convolutional neural network for detection of pathological speech”. In: 2021 IEEE 19th World Symposium on Applied Machine Intelligence and Informatics (SAMI). IEEE. 2021, pp. 000245–000250. DOI: 10.1109/SAMI50585.2021.9378656.
  28. [28] M. HIREŠ et al. “Convolutional neural network ensemble for Parkinson’s disease detection from voice recordings”. In: Computers in Biology and Medicine (2021), p. 105021. DOI: 10.1016/j.compbiomed.2021.105021.
  29. [29] G. H. NAIR, V. REKHA, and M. SOUMYA KRISHNAN. “Handwriting Analysis Using Deep Learning Approach for the Detection of Personality Traits”. In: Ubiquitous Intelligent Systems. Springer, 2022, pp. 531–539. DOI: 10.1007/978-981-16-3675-2_40.
  30. [30] M. GAZDA, M. HIREŠ, and P. DROTÁR. “Multiple-fine-tuned convolutional neural networks for Parkinson’s disease diagnosis from offline handwriting”. In: IEEE Transactions on Systems, Man, and Cybernetics: Systems 52.1 (2021), pp. 78–89.
  31. [31] L. ANTONI et al. “A Two-Phase Multilabel ECG Classification Using One-Dimensional Convolutional Neural Network and Modified Labels”. In: Computing in Cardiology 2021 48 (2021), pp. 1–4. DOI: 10.23919/CinC53138.2021.9662878.
  32. [32] M. GAZDA et al. “Self-Supervised Deep Convolutional Neural Network for Chest X-Ray Classification”. In: IEEE Access 9 (2021), pp. 151972–151982. DOI: 10.1109/ACCESS.2021.3125324.
  33. [33] Q. LI et al. “Medical image classification with convolutional neural network”. In: 2014 13th International Conference on Control Automation Robotics Vision (ICARCV). 2014, pp. 844–848. DOI: 10.1109/ICARCV.2014.7064414.
  34. [34] M. GAZDA et al. “Mixup Augmentation for Kidney and Kidney Tumor Segmentation”. In: Kidney and Kidney Tumor Segmentation. Ed. by N. HELLER et al. Cham: Springer International Publishing, 2022, pp. 90–97. ISBN: 978-3-030-98385-7.
  35. [35] S. HOCHREITER and J. SCHMIDHUBER. “Long Short-Term Memory”. In: Neural Computation 9.8 (1997), pp. 1735–1780.
  36. [36] Y. BENGIO, P. SIMARD, and P. FRASCONI. “Learning long-term dependencies with gradient descent is difficult”. In: IEEE Transactions on Neural Networks 5.2 (1994), pp. 157–166.
  37. [37] R. PASCANU, T. MIKOLOV, and Y. BENGIO. “On the Difficulty of Training Recurrent Neural Networks”. In: Proceedings of the 30th International Conference on International Conference on Machine Learning - Volume 28. ICML’13. Atlanta, GA, USA: JMLR.org, 2013, III–1310–III–1318.
  38. [38] A. GRAVES. Supervised Sequence Labelling with Recurrent Neural Networks. Vol. 385. Studies in Computational Intelligence. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. ISBN: 978-3-642-24797-2.
  39. [39] I. SUTSKEVER, O. VINYALS, and Q. V. LE. Sequence to Sequence Learning with Neural Networks. 2014.
  40. [40] A. GRAVES, A.-R. MOHAMED, and G. HINTON. “Speech recognition with deep recurrent neural networks”. In: 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. 2013, pp. 6645–6649. DOI: 10.1109/ICASSP.2013.6638947.
  41. [41] A. GRAVES. Generating Sequences With Recurrent Neural Networks. 2014.
  42. [42] A. GUPTA et al. “Generative Recurrent Networks for De Novo Drug Design”. In: Molecular Informatics 37.1-2 (2018), p. 1700111. DOI: 10.1002/minf.201700111.
  43. [43] S. KUMAR and D. SUBHA. “Prediction of Depression from EEG Signal Using Long Short Term Memory (LSTM)”. In: 2019 3rd International Conference on Trends in Electronics and Informatics (ICOEI). 2019, pp. 1248–1253. DOI: 10.1109/ICOEI.2019.8862560.
  44. [44] J. PENG and Y. WANG. “Medical image segmentation with limited supervision: A review of deep network models”. In: IEEE Access 9 (2021), pp. 36827–36851.
  45. [45] O. RONNEBERGER, P. FISCHER, and T. BROX. “U-Net: Convolutional Networks for Biomedical Image Segmentation”. In: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. Ed. by N. NAVAB et al. Cham: Springer International Publishing, 2015, pp. 234–241. ISBN: 978-3-319-24574-4.
  46. [46] M. DROZDZAL et al. “The Importance of Skip Connections in Biomedical Image Segmentation”. In: Deep Learning and Data Labeling for Medical Applications. Ed. by G. Carneiro et al. Cham: Springer International Publishing, 2016, pp. 179–187. ISBN: 978-3-319-46976-8.
  47. [47] G. HUANG et al. “Densely Connected Convolutional Networks”. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017). DOI: 10.1109/CVPR.2017.243.
  48. [48] P. BILIC et al. “The liver tumor segmentation benchmark (lits)”. In: arXiv preprint arXiv:1901.04056 (2019).
  49. [49] X. GUO et al. “Retinal vessel segmentation combined with generative adversarial networks and dense U-net”. In: IEEE Access 8 (2020), pp. 194551–194560. DOI: 10.1109/ACCESS.2020.3033273.
  50. [50] K. C. SIONTIS et al. “Artificial intelligence-enhanced electrocardiography in cardiovascular disease management”. In: Nature Reviews Cardiology (2021). ISSN: 1759-5010.
  51. [51] M. ELGENDI et al. “Revisiting QRS Detection Methodologies for Portable, Wearable, Battery-Operated, and Wireless ECG Systems”. In: PLOS ONE 9.1 (Jan. 2014), pp. 1–18. DOI: 10.1371/journal.pone.0084018.
  52. [52] P. WAGNER et al. “PTB-XL, a large publicly available electrocardiography dataset”. In: Scientific Data 7.154 (2020). ISSN: 2052-4463. DOI: 10.1038/s41597-020-0495-6.
  53. [53] G. B. MOODY and R. G. MARK. “The impact of the MIT-BIH Arrhythmia Database”. In: IEEE Engineering in Medicine and Biology Magazine 20.3 (2001), pp. 45–50.
  54. [54] S. VIJAYARANGAN et al. “RPnet: A Deep Learning approach for robust R Peak detection in noisy ECG”. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine Biology Society (EMBC). 2020, pp. 345–348. DOI: 10.1109/EMBC44109.2020.9176084.
  55. [55] A. Y. HANNUN et al. “Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network”. In: Nature Medicine 25 (2019), pp. 65–69. ISSN: 1546-170X. DOI: 10.1038/s41591-018-0268-3.
  56. [56] P. RAJPURKAR et al. “Cardiologist-Level Arrhythmia Detection with Convolutional Neural Networks”. In: arXiv preprint arXiv:1707.01836 (2017).
  57. [57] M. TURAKHIA et al. “Diagnostic utility of a novel leadless arrhythmia monitoring device”. In: American Journal of Cardiology 112.4 (Aug. 2013), pp. 520–524. ISSN: 0002-9149. DOI: 10.1016/j.amjcard.2013.04.017.
  58. [58] F. LIU et al. “An open access database for evaluating the algorithms of electrocardiogram rhythm and morphology abnormality detection”. In: Journal of Medical Imaging and Health Informatics 8 (2018), pp. 1368–1373. DOI: 10.1166/jmihi.2018.2442.
  59. [59] E. A. PEREZ ALDAY et al. “Classification of 12-lead ECGs: the PhysioNet/Computing in Cardiology Challenge 2020”. In: Physiological Measurement 41.12 (2020). DOI: 10.1088/1361-6579/abc960.
  60. [60] M. A. REYNA et al. “Will Two Do? Varying Dimensions in Electrocardiography: the PhysioNet/Computing in Cardiology Challenge 2021”. In: Computing in Cardiology 2021 48 (2021), pp. 1–4. DOI: 10.23919/CinC53138.2021.9662687.
  61. [61] F. N. IANDOLA et al. “SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and < 0.5 MB model size”. In: arXiv preprint arXiv:1602.07360 (2016).
  62. [62] A. ESTEVA et al. “Dermatologist-level classification of skin cancer with deep neural networks”. In: Nature 542.7639 (2017), pp. 115–118. DOI: 10.1038/nature21056.
  63. [63] F. J. DÍAZ-PERNAS et al. “A deep learning approach for brain tumor classification and segmentation using a multiscale convolutional neural network”. In: Healthcare. Vol. 9. 2. Multidisciplinary Digital Publishing Institute. 2021, p. 153. DOI: 10.3390/healthcare9020153.
  64. [64] P. LAKHANI and B. SUNDARAM. “Deep learning at chest radiography: automated classification of pulmonary tuberculosis by using convolutional neural networks”. In: Radiology 284.2 (2017), pp. 574–582.
  65. [65] Y. IKENOYAMA et al. “Detecting early gastric cancer: Comparison between the diagnostic ability of convolutional neural networks and endoscopists”. In: Digestive Endoscopy 33.1 (2021), pp. 141–150. DOI: 10.1111/den.13688.
  66. [66] L. WU et al. “A deep neural network improves endoscopic detection of early gastric cancer without blind spots”. In: Endoscopy 51.06 (2019), pp. 522–531. DOI: 10.1055/a-0855-3532.
  67. [67] L. LI et al. “Attention based glaucoma detection: a large-scale database and CNN model”. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019, pp. 10571–10580. DOI: 10.1109/CVPR.2019.01082.
  68. [68] O. RONNEBERGER, P. FISCHER, and T. BROX. “U-net: Convolutional networks for biomedical image segmentation”. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer. 2015, pp. 234–241. DOI: 10.1007/978-3-319-24574-4_28.
  69. [69] F. ISENSEE et al. “nnU-Net: Self-adapting framework for U-Net-based medical image segmentation”. In: arXiv preprint arXiv:1809.10486 (2018). DOI: 10.1007/978-3-658-25326-4_7.
  70. [70] N. HELLER et al. “The state of the art in kidney and kidney tumor segmentation in contrast-enhanced CT imaging: Results of the KiTS19 challenge”. In: Medical Image Analysis 67 (2021), p. 101821. DOI: 10.1016/j.media.2020.101821.
  71. [71] F. ISENSEE et al. “Brain tumor segmentation and radiomics survival prediction: Contribution to the BraTS 2017 challenge”. In: International MICCAI Brainlesion Workshop. Springer. 2017, pp. 287–297. DOI: 10.1007/978-3-319-75238-9_25.
  72. [72] A. GRAVES, A.-R. MOHAMED, and G. HINTON. “Speech recognition with deep recurrent neural networks”. In: 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE. 2013, pp. 6645–6649. DOI: 10.1109/ICASSP.2013.6638947.
  73. [73] Z. YUE et al. “Automatic CIN grades prediction of sequential cervigram image using LSTM with multi-state CNN features”. In: IEEE Journal of Biomedical and Health Informatics 24.3 (2019), pp. 844–854. DOI: 10.1109/JBHI.2019.2922682.
  74. [74] Y. JIN et al. “SV-RCNet: workflow recognition from surgical videos using recurrent convolutional network”. In: IEEE Transactions on Medical Imaging 37.5 (2017), pp. 1114–1126. DOI: 10.1109/TMI.2017.2787657.
  75. [75] X. SHI et al. “Convolutional LSTM network: A machine learning approach for precipitation nowcasting”. In: Advances in neural information processing systems 28 (2015).
  76. [76] M. F. STOLLENGA et al. “Parallel multidimensional LSTM, with application to fast biomedical volumetric image segmentation”. In: Advances in neural information processing systems 28 (2015), pp. 2998–3006.
  77. [77] A. KONWER et al. “Predicting COVID-19 lung infiltrate progression on chest radiographs using spatio-temporal LSTM based encoder-decoder network”. In: Medical Imaging with Deep Learning. PMLR. 2021, pp. 384–398.
  78. [78] D. ZHANG et al. “A multi-level convolutional LSTM model for the segmentation of left ventricle myocardium in infarcted porcine cine MR images”. In: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018). 2018, pp. 470–473. DOI: 10.1109/ISBI.2018.8363618.
  79. [79] R. GAO et al. “Distanced LSTM: time-distanced gates in long short-term memory models for lung cancer detection”. In: International Workshop on Machine Learning in Medical Imaging. Springer. 2019, pp. 310–318. DOI: 10.1007/978-3-030-32692-0_36.
  80. [80] J. CAI et al. “Improving deep pancreas segmentation in CT and MRI images via recurrent neural contextual learning and direct loss function”. In: arXiv preprint arXiv:1707.04912 (2017).
  81. [81] R. SANTERAMO, S. WITHEY, and G. MONTANA. “Longitudinal detection of radiological abnormalities with time-modulated LSTM”. In: Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. Springer, 2018, pp. 326–333. DOI: 10.1007/978-3-030-00889-5_37.
  82. [82] N. BRAMAN, D. BEYMER, and E. DEHGHAN. “Disease detection in weakly annotated volumetric medical images using a convolutional LSTM network”. In: arXiv preprint arXiv:1812.01087 (2018).
  83. [83] D. JIANG et al. “Could graph neural networks learn better molecular representation for drug discovery? A comparison study of descriptor-based and graph-based models”. In: Journal of Cheminformatics 13.1 (2021), pp. 1–23. DOI: 10.1186/s13321-020-00479-8.
  84. [84] M. M. BRONSTEIN et al. “Geometric deep learning: Grids, groups, graphs, geodesics, and gauges”. In: arXiv preprint arXiv:2104.13478 (2021).
DOI: https://doi.org/10.2478/aei-2022-0010 | Journal eISSN: 1338-3957 | Journal ISSN: 1335-8243
Language: English
Page range: 34 - 44
Submitted on: Apr 20, 2022
Accepted on: Jun 21, 2022
Published on: Aug 12, 2022
Published by: Technical University of Košice
In partnership with: Paradigm Publishing Services
Publication frequency: 4 issues per year

© 2022 Máté Hireš, Peter Bugata, Matej Gazda, Dávid J. Hreško, Róbert Kanász, Lukáš Vavrek, Peter Drotár, published by Technical University of Košice
This work is licensed under the Creative Commons Attribution 4.0 License.