
A Progressive and Cross-Domain Deep Transfer Learning Framework for Wrist Fracture Detection

Open Access | Feb 2022

References

  [1] W. Cooney, R. Bussey, J. Dobyns, and R. Linscheid, “Difficult wrist fractures: perilunate fracture-dislocations of the wrist,” Clinical Orthopaedics and Related Research, no. 214, pp. 136–147, 1987. doi: 10.1097/00003086-198701000-00020
  [2] R. Lindsey, A. Daluiski, S. Chopra, A. Lachapelle, M. Mozer, S. Sicular, et al., “Deep neural network improves fracture detection by clinicians,” Proceedings of the National Academy of Sciences, vol. 115, no. 45, pp. 11591–11596, 2018.
  [3] C. M. Court-Brown and B. Caesar, “Epidemiology of adult fractures: A review,” Injury, vol. 37, pp. 691–697, Aug. 2006. doi: 10.1016/j.injury.2006.04.130
  [4] C. A. Goldfarb, Y. Yin, L. A. Gilula, A. J. Fisher, and M. I. Boyer, “Wrist fractures: What the clinician wants to know,” Radiology, vol. 219, no. 1, pp. 11–28, 2001. PMID: 11274530. doi: 10.1148/radiology.219.1.r01ap1311
  [5] H. R. Guly, “Injuries initially misdiagnosed as sprained wrist (beware the sprained wrist),” Emergency Medicine Journal, vol. 19, no. 1, pp. 41–42, 2002. doi: 10.1136/emj.19.1.41
  [6] B. Petinaux, R. Bhat, K. Boniface, and J. Aristizabal, “Accuracy of radiographic readings in the emergency department,” The American Journal of Emergency Medicine, vol. 29, pp. 18–25, Jan. 2011. doi: 10.1016/j.ajem.2009.07.011
  [7] G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, et al., “A survey on deep learning in medical image analysis,” Medical Image Analysis, vol. 42, pp. 60–88, 2017. doi: 10.1016/j.media.2017.07.005
  [8] D. Kim and T. MacKinnon, “Artificial intelligence in fracture detection: Transfer learning from deep convolutional neural networks,” Clinical Radiology, vol. 73, Dec. 2017. doi: 10.1016/j.crad.2017.11.015
  [9] J. Olczak, N. Fahlberg, A. Maki, A. Razavian, A. Jilert, A. Stark, et al., “Artificial intelligence for analyzing orthopedic trauma radiographs: Deep learning algorithms—are they on par with humans for diagnosing fractures?,” Acta Orthopaedica, vol. 88, pp. 1–6, Jul. 2017. doi: 10.1080/17453674.2017.1344459
  [10] R. Lindsey, A. Daluiski, S. Chopra, A. Lachapelle, M. Mozer, S. Sicular, et al., “Deep neural network improves fracture detection by clinicians,” Proceedings of the National Academy of Sciences, vol. 115, no. 45, pp. 11591–11596, 2018.
  [11] D. Soekhoe, P. van der Putten, and A. Plaat, “On the impact of data set size in transfer learning using deep neural networks,” Lecture Notes in Computer Science: Advances in Intelligent Data Analysis XV, pp. 50–60, 2016. doi: 10.1007/978-3-319-46349-0_5
  [12] H.-C. Shin, H. R. Roth, M. Gao, L. Lu, Z. Xu, I. Nogues, et al., “Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning,” IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1285–1298, 2016.
  [13] B. Q. Huynh, H. Li, and M. L. Giger, “Digital mammographic tumor classification using transfer learning from deep convolutional neural networks,” Journal of Medical Imaging, vol. 3, no. 3, p. 034501, 2016.
  [14] A. van Opbroek, M. A. Ikram, M. W. Vernooij, and M. de Bruijne, “Transfer learning improves supervised image segmentation across imaging protocols,” IEEE Transactions on Medical Imaging, vol. 34, no. 5, pp. 1018–1030, 2014.
  [15] V. Christen, A. Groß, and E. Rahm, “Approaches for annotating medical documents,” in LWDA, pp. 227–232, 2016.
  [16] P. Klassen, F. Xia, and M. Yetisgen-Yildiz, “Annotating and detecting medical events in clinical notes,” in Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), pp. 3417–3421, 2016.
  [17] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: A large-scale hierarchical image database,” in 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255, IEEE, 2009. doi: 10.1109/CVPR.2009.5206848
  [18] C. Karam, J. El Zini, and M. Awad, “X-ray wrist fracture classification,” 2019.
  [19] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, p. 436, 2015.
  [20] P. Lakhani and B. Sundaram, “Deep learning at chest radiography: automated classification of pulmonary tuberculosis by using convolutional neural networks,” Radiology, vol. 284, no. 2, pp. 574–582, 2017. doi: 10.1148/radiol.2017162326
  [21] M. P. McBee, O. A. Awan, A. T. Colucci, C. W. Ghobadi, N. Kadom, A. P. Kansagra, et al., “Deep learning in radiology,” Academic Radiology, vol. 25, no. 11, pp. 1472–1480, 2018.
  [22] J. H. Thrall, X. Li, Q. Li, C. Cruz, S. Do, K. Dreyer, et al., “Artificial intelligence and machine learning in radiology: opportunities, challenges, pitfalls, and criteria for success,” Journal of the American College of Radiology, vol. 15, no. 3, pp. 504–508, 2018. doi: 10.1016/j.jacr.2017.12.026
  [23] M. A. Mazurowski, M. Buda, A. Saha, and M. R. Bashir, “Deep learning in radiology: An overview of the concepts and a survey of the state of the art with focus on MRI,” Journal of Magnetic Resonance Imaging, vol. 49, no. 4, pp. 939–954, 2019. doi: 10.1002/jmri.26534
  [24] A. S. Becker, M. Marcon, S. Ghafoor, M. C. Wurnig, T. Frauenfelder, and A. Boss, “Deep learning in mammography: diagnostic accuracy of a multipurpose image analysis software in the detection of breast cancer,” Investigative Radiology, vol. 52, no. 7, pp. 434–440, 2017. doi: 10.1097/RLI.0000000000000358
  [25] J. Wang, X. Yang, H. Cai, W. Tan, C. Jin, and L. Li, “Discrimination of breast cancer with microcalcifications on mammography by deep learning,” Scientific Reports, vol. 6, p. 27327, 2016.
  [26] D. Ribli, A. Horváth, Z. Unger, P. Pollner, and I. Csabai, “Detecting and classifying lesions in mammograms with deep learning,” Scientific Reports, vol. 8, no. 1, p. 4165, 2018. doi: 10.1038/s41598-018-22437-z
  [27] M. Araya-Polo, J. Jennings, A. Adler, and T. Dahlke, “Deep-learning tomography,” The Leading Edge, vol. 37, no. 1, pp. 58–66, 2018. doi: 10.1190/tle37010058.1
  [28] K.-L. Hua, C.-H. Hsu, S. C. Hidayati, W.-H. Cheng, and Y.-J. Chen, “Computer-aided classification of lung nodules on computed tomography images via deep learning technique,” OncoTargets and Therapy, vol. 8, 2015.
  [29] T. Würfl, F. C. Ghesu, V. Christlein, and A. Maier, “Deep learning computed tomography,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 432–440, Springer, 2016. doi: 10.1007/978-3-319-46726-9_50
  [30] H. Zhang, L. Li, K. Qiao, L. Wang, B. Yan, L. Li, et al., “Image prediction for limited-angle tomography via deep learning with convolutional neural network,” arXiv preprint arXiv:1607.08707, 2016.
  [31] M. H. Yap, G. Pons, J. Martí, S. Ganau, M. Sentís, R. Zwiggelaar, et al., “Automated breast ultrasound lesions detection using convolutional neural networks,” IEEE Journal of Biomedical and Health Informatics, vol. 22, no. 4, pp. 1218–1226, 2017.
  [32] K. Lekadir, A. Galimzianova, À. Betriu, M. del Mar Vila, L. Igual, D. L. Rubin, et al., “A convolutional neural network for automatic characterization of plaque composition in carotid ultrasound,” IEEE Journal of Biomedical and Health Informatics, vol. 21, no. 1, pp. 48–55, 2016. doi: 10.1109/JBHI.2016.2631401
  [33] P. Burlina, S. Billings, N. Joshi, and J. Albayda, “Automated diagnosis of myositis from muscle ultrasound: Exploring the use of machine learning and deep learning methods,” PLoS ONE, vol. 12, no. 8, p. e0184059, 2017. doi: 10.1371/journal.pone.0184059
  [34] P. H. Kalmet, S. Sanduleanu, S. Primakov, G. Wu, A. Jochems, T. Refaee, A. Ibrahim, L. v. Hulst, P. Lambin, and M. Poeze, “Deep learning in fracture detection: a narrative review,” Acta Orthopaedica, vol. 91, no. 2, pp. 215–220, 2020. doi: 10.1080/17453674.2019.1711323
  [35] R. M. Jones, A. Sharma, R. Hotchkiss, J. W. Sperling, J. Hamburger, C. Ledig, R. O’Toole, M. Gardner, S. Venkatesh, M. M. Roberts, et al., “Assessment of a deep-learning system for fracture detection in musculoskeletal radiographs,” NPJ Digital Medicine, vol. 3, no. 1, pp. 1–6, 2020. doi: 10.1038/s41746-020-00352-w
  [36] A. M. Raisuddin, E. Vaattovaara, M. Nevalainen, M. Nikki, E. Järvenpää, K. Makkonen, P. Pinola, T. Palsio, A. Niemensivu, O. Tervonen, et al., “Critical evaluation of deep neural networks for wrist fracture detection,” Scientific Reports, vol. 11, no. 1, pp. 1–11, 2021. doi: 10.1038/s41598-021-85570-2
  [37] B. Guan, G. Zhang, J. Yao, X. Wang, and M. Wang, “Arm fracture detection in x-rays based on improved deep convolutional neural network,” Computers & Electrical Engineering, vol. 81, p. 106530, 2020.
  [38] S. J. Pan and Q. Yang, “A survey on transfer learning,” IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 10, pp. 1345–1359, 2010.
  [39] R. Raina, A. Battle, H. Lee, B. Packer, and A. Y. Ng, “Self-taught learning: transfer learning from unlabeled data,” in Proceedings of the 24th International Conference on Machine Learning, pp. 759–766, ACM, 2007. doi: 10.1145/1273496.1273592
  [40] H. Ravishankar, P. Sudhakar, R. Venkataramani, S. Thiruvenkadam, P. Annangi, N. Babu, et al., “Understanding the mechanisms of deep transfer learning for medical images,” in Deep Learning and Data Labeling for Medical Applications, pp. 188–196, Springer, 2016. doi: 10.1007/978-3-319-46976-8_20
  [41] N. Tajbakhsh, J. Y. Shin, S. R. Gurudu, R. T. Hurst, C. B. Kendall, M. B. Gotway, et al., “Convolutional neural networks for medical image analysis: Full training or fine tuning?,” IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1299–1312, 2016.
  [42] B. J. Erickson, P. Korfiatis, Z. Akkus, and T. L. Kline, “Machine learning for medical imaging,” RadioGraphics, vol. 37, no. 2, pp. 505–515, 2017. doi: 10.1148/rg.2017160130
  [43] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.
  [44] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi, “Inception-v4, Inception-ResNet and the impact of residual connections on learning,” in Thirty-First AAAI Conference on Artificial Intelligence, 2017. doi: 10.1609/aaai.v31i1.11231
  [45] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” in Advances in Neural Information Processing Systems, pp. 91–99, 2015.
  [46] T. Urakawa, Y. Tanaka, H. Matsuzawa, K. Watanabe, and N. Endo, “Detecting intertrochanteric hip fractures with orthopedist-level accuracy using a deep convolutional neural network,” Skeletal Radiology, vol. 42, pp. 239–244, 2019. doi: 10.1007/s00256-018-3016-3
  [47] P. Rajpurkar, J. Irvin, A. Bagul, D. Ding, T. Duan, H. Mehta, et al., “MURA dataset: Towards radiologist-level abnormality detection in musculoskeletal radiographs,” arXiv preprint arXiv:1712.06957, 2017.
  [48] K. Gan, D. Xu, Y. Lin, Y. Shen, T. Zhang, K. Hu, et al., “Artificial intelligence detection of distal radius fractures: a comparison between the convolutional neural network and professional assessments,” Acta Orthopaedica, pp. 1–12, 2019. doi: 10.1080/17453674.2019.1600125
  [49] J. de Matos, A. de Souza Britto Jr., L. E. S. Oliveira, and A. L. Koerich, “Double transfer learning for breast cancer histopathologic image classification,” CoRR, vol. abs/1904.07834, 2019.
  [50] S. Christodoulidis, M. Anthimopoulos, L. Ebner, A. Christe, and S. G. Mougiakakou, “Multi-source transfer learning with convolutional neural networks for lung pattern analysis,” CoRR, vol. abs/1612.02589, 2016.
  [51] J. Li, W. Wu, D. Xue, and P. Gao, “Multi-source deep transfer neural network algorithm,” Sensors, vol. 19, p. 3992, Sep. 2019. doi: 10.3390/s19183992
  [52] R. Gupta and L.-A. Ratinov, “Text categorization with knowledge transfer from heterogeneous data sources,” in AAAI, pp. 842–847, 2008.
  [53] Z. Yu, Z. Jin, L. Wei, J. Guo, J. Huang, D. Cai, X. He, and X.-S. Hua, “Progressive transfer learning for person re-identification,” in Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, Aug. 2019. doi: 10.24963/ijcai.2019/586
  [54] W. Hu, Y. Jin, X. Wu, and J. Chen, “Progressive transfer learning for low frequency data prediction in full waveform inversion,” 2019.
  [55] Y. Gu, Z. Ge, C. P. Bonnington, and J. Zhou, “Progressive transfer learning and adversarial domain adaptation for cross-domain skin disease classification,” IEEE Journal of Biomedical and Health Informatics, vol. 24, no. 5, pp. 1379–1393, 2020.
  [56] J. Antolík, “Automatic annotation of medical records,” Studies in Health Technology and Informatics, vol. 116, pp. 817–822, 2005.
  [57] C. Ganoe, W. Wu, P. Barr, W. Haslett, M. Dannenberg, K. Bonasia, J. Finora, J. Schoonmaker, W. Onsando, J. Ryan, et al., “Natural language processing for automated annotation of medication mentions in primary care visit conversations,” medRxiv, 2021. doi: 10.1101/2021.03.29.21254488
  [58] H. Li, B. Zhang, Y. Zhang, W. Liu, Y. Mao, J. Huang, and L. Wei, “A semi-automated annotation algorithm based on weakly supervised learning for medical images,” Biocybernetics and Biomedical Engineering, vol. 40, no. 2, pp. 787–802, 2020. doi: 10.1016/j.bbe.2020.03.005
  [59] R. Bouslimi and J. Akaichi, “New approach for automatic medical image annotation using the bag-of-words model,” in 2015 6th IEEE International Conference on Software Engineering and Service Science (ICSESS), pp. 1088–1093, 2015.
  [60] T. Gong, S. Li, J. Wang, C. L. Tan, B. Pang, T. Lim, C. Lee, Q. Tian, and Z. Zhang, “Automatic labeling and classification of brain CT images,” pp. 1581–1584, Sep. 2011.
  [61] A. R. Aronson and F.-M. Lang, “An overview of MetaMap: historical perspective and recent advances,” Journal of the American Medical Informatics Association, vol. 17, no. 3, pp. 229–236, 2010. doi: 10.1136/jamia.2009.002733
  [62] M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” CoRR, vol. abs/1311.2901, 2013.
  [63] M. Sundararajan, A. Taly, and Q. Yan, “Axiomatic attribution for deep networks,” arXiv preprint arXiv:1703.01365, 2017.
  [64] M. T. Ribeiro, S. Singh, and C. Guestrin, “Anchors: High-precision model-agnostic explanations,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32, 2018. doi: 10.1609/aaai.v32i1.11491
  [65] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba, “Object detectors emerge in deep scene CNNs,” 2015.
  [66] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba, “Learning deep features for discriminative localization,” in CVPR, 2016. doi: 10.1109/CVPR.2016.319
  [67] A. Chattopadhay, A. Sarkar, P. Howlader, and V. N. Balasubramanian, “Grad-CAM++: Generalized gradient-based visual explanations for deep convolutional networks,” in 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), Mar. 2018. doi: 10.1109/WACV.2018.00097
  [68] B. N. Patro, M. Lunayach, S. Patel, and V. P. Namboodiri, “U-CAM: Visual explanation using uncertainty based class activation maps,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 7444–7453, 2019.
  [69] M. T. Ribeiro, S. Singh, and C. Guestrin, “‘Why should I trust you?’: Explaining the predictions of any classifier,” 2016. doi: 10.1145/2939672.2939778
  [70] D. Bau, B. Zhou, A. Khosla, A. Oliva, and A. Torralba, “Network dissection: Quantifying interpretability of deep visual representations,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6541–6549, 2017.
  [71] R. Fong and A. Vedaldi, “Net2Vec: Quantifying and explaining how concepts are encoded by filters in deep neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8730–8738, 2018.
  [72] K. Leino, S. Sen, A. Datta, M. Fredrikson, and L. Li, “Influence-directed explanations for deep convolutional networks,” in 2018 IEEE International Test Conference (ITC), pp. 1–8, IEEE, 2018. doi: 10.1109/TEST.2018.8624792
  [73] A. Mahendran and A. Vedaldi, “Understanding deep image representations by inverting them,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5188–5196, 2015.
  [74] A. Dosovitskiy and T. Brox, “Inverting visual representations with convolutional networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4829–4837, 2016.
  [75] S. Bach, A. Binder, G. Montavon, F. Klauschen, K. Müller, and W. Samek, “On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation,” PLoS ONE, vol. 10, 2015. doi: 10.1371/journal.pone.0130140
  [76] M. Böhle, F. Eitel, M. Weygandt, and K. Ritter, “Layer-wise relevance propagation for explaining deep neural network decisions in MRI-based Alzheimer’s disease classification,” Frontiers in Aging Neuroscience, vol. 11, p. 194, Jul. 2019. doi: 10.3389/fnagi.2019.00194
  [77] F. Eitel, E. Soehler, J. Bellmann-Strobl, A. U. Brandt, K. Ruprecht, R. M. Giess, J. Kuchling, S. Asseyer, M. Weygandt, J.-D. Haynes, M. Scheel, F. Paul, and K. Ritter, “Uncovering convolutional neural network decisions for diagnosing multiple sclerosis on conventional MRI using layer-wise relevance propagation,” NeuroImage: Clinical, vol. 24, p. 102003, 2019.
  [78] D. R. Hardoon, S. Szedmak, and J. Shawe-Taylor, “Canonical correlation analysis: An overview with application to learning methods,” Neural Computation, vol. 16, no. 12, pp. 2639–2664, 2004.
  [79] D. Sussillo, M. M. Churchland, M. T. Kaufman, and K. V. Shenoy, “A neural network that finds a naturalistic solution for the production of muscle activity,” Nature Neuroscience, vol. 18, no. 7, pp. 1025–1033, 2015.
  [80] M. Faruqui and C. Dyer, “Improving vector space word representations using multilingual correlation,” in Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pp. 462–471, 2014. doi: 10.3115/v1/E14-1049
  [81] M. Raghu, J. Gilmer, J. Yosinski, and J. Sohl-Dickstein, “SVCCA: Singular vector canonical correlation analysis for deep learning dynamics and interpretability,” in Advances in Neural Information Processing Systems, pp. 6076–6085, 2017.
  [82] Y. Bengio, A. Courville, and P. Vincent, “Representation learning: A review and new perspectives,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 8, pp. 1798–1828, 2013.
  [83] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson, “How transferable are features in deep neural networks?,” in Advances in Neural Information Processing Systems, pp. 3320–3328, 2014.
  [84] C. Castillo, T. Steffens, L. Sim, and L. Caffery, “The effect of clinical information on radiology reporting: A systematic review,” Journal of Medical Radiation Sciences, vol. 68, no. 1, pp. 60–74, 2021. doi: 10.1002/jmrs.424
  [85] Theano Development Team, “Theano: A Python framework for fast computation of mathematical expressions,” arXiv e-prints, vol. abs/1605.02688, May 2016.
  [86] G. Huang, Z. Liu, and K. Q. Weinberger, “Densely connected convolutional networks,” CoRR, vol. abs/1608.06993, 2016.
Language: English
Page range: 101–120
Submitted on: May 24, 2021
Accepted on: Aug 9, 2021
Published on: Feb 23, 2022
Published by: SAN University
In partnership with: Paradigm Publishing Services
Publication frequency: 4 issues per year

© 2022 Christophe Karam, Julia El Zini, Mariette Awad, Charbel Saade, Lena Naffaa, Mohammad El Amine, published by SAN University
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.