
Automatic Classification of Unexploded Ordnance (UXO) Based on Deep Learning Neural Networks (DLNNs)


References

  1. D. Ciresan, U. Meier, J. Masci, and J. Schmidhuber, "Multi-column deep neural network for traffic sign classification", Neural Networks, vol. 32, pp. 333–338, 2012, doi:10.1016/j.neunet.2012.02.023.
  2. Y. Zhao, M. Qi, X. Li, Y. Meng, Y. Yu, and Y. Dong, "P-LPN: Towards real time pedestrian location perception in complex driving scenes", IEEE Access, vol. 8, pp. 54730–54740, 2020, doi:10.1109/ACCESS.2020.2981821.
  3. E. Byvatov, U. Fechner, J. Sadowski, and G. Schneider, "Comparison of support vector machine and artificial neural network systems for drug/nondrug classification", J. Chem. Inf. Comput. Sci., vol. 43, no. 6, pp. 1882–1889, 2003, doi:10.1021/ci0341161.
  4. S. Lu, Z. Lu, and Y. Zhang, "Pathological brain detection based on AlexNet and transfer learning", J. Comput. Sci., vol. 30, pp. 41–47, 2019, doi:10.1016/j.jocs.2018.11.008.
  5. Ø. Midtgaard, R. E. Hansen, P. E. Hagen, and N. Størkersen, "Imaging sensors for autonomous underwater vehicles in military operations", in Proc. SET-169 Military Sensors Symposium, Friedrichshafen, Germany, May 2011.
  6. HELCOM CHEMU, “Report to the 16th Meeting of Helsinki Commission 8-11 March 1994 from the Ad Hoc Working Group on Dumped Chemical Munition”, Danish Environ. Protec. Agency, 1994.
  7. J. Fabisiak and A. Olejnik, "Amunicja chemiczna zatopiona w Morzu Bałtyckim - poszukiwania i ocena ryzyka - projekt badawczy CHEMSEA (Chemical munitions dumped in the Baltic Sea - search and risk assessment - CHEMSEA research project)", Pol. Hyperb. Res., pp. 25–52, 2012.
  8. “Sea mines Ukraine waters Russia war Black Sea,” The Guardian, 2022. [Online]. Available: www.theguardian.com/world/2022/jul/11/sea-mines-ukraine-waters-russia-war-black-sea. [Accessed: June 21, 2023].
  9. “Pretrained Convolutional Neural Networks,” MathWorks, 2023. [Online]. Available: https://uk.mathworks.com/help/deeplearning/ug/pretrained-convolutional-neural-networks.html. [Accessed: June 21, 2023].
  10. M. Chodnicki, P. Krogulec, M. Żokowski, and N. Sigiel, "Procedures concerning preparations of autonomous underwater systems to operation focused on detection, classification and identification of mine like objects and ammunition", J. KONBiN, vol. 48, no. 1, pp. 149–168, 2018, doi:10.2478/jok-2018-0051.
  11. Dowództwo Marynarki Wojennej (Naval Command), "Album Min Morskich" ("Sea Mines Album"). Gdynia, Poland: Mar. Woj., Sep. 1947.
  12. “Image Colorization Using Generative Adversarial Networks,” Pinterest, 2023. [Online]. Available: https://www.pinterest.co.uk/pin/145944844154595254/. [Accessed: Sep. 14, 2023].
  13. “SNMCMG1 Photos,” Facebook, 2023. [Online]. Available: https://www.facebook.com/snmcmg1/photos/a.464547430274739/2304079142988216/. [Accessed: Oct. 9, 2023].
  14. "Pretrained Convolutional Neural Networks," MathWorks, 2023. [Online]. Available: https://www.mathworks.com/help/deeplearning/ug/pretrained-convolutional-neural-networks.html. [Accessed: Sep. 14, 2023].
  15. P. Szymak, P. Piskur, and K. Naus, "The effectiveness of using a pretrained deep learning neural networks for object classification in underwater video", Remote Sens., vol. 12, no. 18, p. 3020, 2020, doi:10.3390/rs12183020.
  16. “NATO forces clear mines from the Baltic in Open Spirit operation,” NATO, 2021. [Online]. Available: https://mc.nato.int/media-centre/news/2021/nato-forces-clear-mines-from-the-baltic-in-open-spirit-operation. [Accessed: Sep. 14, 2023].
  17. F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer, "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size", arXiv preprint arXiv:1602.07360, 2016.
  18. Z. Cui, C. Tang, Z. Cao, and N. Liu, "D-ATR for SAR images based on deep neural networks", Remote Sens., vol. 11, no. 8, p. 906, 2019, doi:10.3390/rs11080906.
  19. C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, "Rethinking the inception architecture for computer vision", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2818–2826, doi:10.1109/CVPR.2016.308.
  20. G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, "Densely connected convolutional networks", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 4700–4708.
  21. M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, "MobileNetV2: Inverted residuals and linear bottlenecks", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 4510–4520, doi:10.1109/CVPR.2018.00474.
  22. K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778, doi:10.1109/CVPR.2016.90.
  23. F. Chollet, "Xception: Deep learning with depthwise separable convolutions", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 1251–1258, doi:10.1109/CVPR.2017.195.
  24. K. Nazeri, E. Ng, and M. Ebrahimi, "Image colorization using generative adversarial networks", in Articulated Motion and Deformable Objects: 10th International Conference, AMDO 2018, Palma de Mallorca, Spain, July 12–13, 2018, Proceedings, Springer, 2018, pp. 85–94, doi:10.1007/978-3-319-94544-6_9.
  25. X. Zhang, X. Zhou, M. Lin, and J. Sun, "ShuffleNet: An extremely efficient convolutional neural network for mobile devices", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 6848–6856, doi:10.1109/CVPR.2018.00716.
  26. B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le, "Learning transferable architectures for scalable image recognition", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 8697–8710, doi:10.1109/CVPR.2018.00907.
  27. O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein et al., "ImageNet large scale visual recognition challenge", Int. J. Comput. Vis., vol. 115, pp. 211–252, 2015, doi:10.1007/s11263-015-0816-y.
  28. K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition", arXiv preprint arXiv:1409.1556, 2014.
  29. W. Wu, L. Guo, H. Gao, Z. You, Y. Liu, and Z. Chen, "YOLO-SLAM: A semantic SLAM system towards dynamic environment with geometric constraint", Neural Comput. Appl., pp. 1–16, 2022, doi:10.1007/s00521-021-06764-3.
  30. Ü. Atila, M. Uçar, K. Akyol, and E. Uçar, "Plant leaf disease classification using EfficientNet deep learning model", Ecol. Inform., vol. 61, p. 101182, 2021, doi:10.1016/j.ecoinf.2020.101182.
DOI: https://doi.org/10.2478/pomr-2024-0008 | Journal eISSN: 2083-7429 | Journal ISSN: 1233-2585
Language: English
Page range: 77 - 84
Published on: Mar 29, 2024
Published by: Gdansk University of Technology
In partnership with: Paradigm Publishing Services
Publication frequency: 4 issues per year

© 2024 Norbert Sigiel, Marcin Chodnicki, Paweł Socik, Rafał Kot, published by Gdansk University of Technology
This work is licensed under the Creative Commons Attribution 4.0 License.