[2] M. Elleuch, R. Maalej, and M. Kherallah, “A new design based-SVM of the CNN classifier architecture with dropout for offline Arabic handwritten recognition,” Procedia Computer Science, vol. 80, 2016, pp. 1712–1723. https://doi.org/10.1016/j.procs.2016.05.512
[8] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, iss. 11, 1998, pp. 2278–2324. https://doi.org/10.1109/5.726791
[9] P. Simard, D. Steinkraus, and J. C. Platt, “Best practices for convolutional neural networks applied to visual document analysis,” International Conference on Document Analysis and Recognition (ICDAR), vol. 3, 2003, pp. 958–962. https://doi.org/10.1109/ICDAR.2003.1227801
[11] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Communications of the ACM, vol. 60, iss. 6, 2017, pp. 84–90. https://doi.org/10.1145/3065386
[12] J. Mutch and D. G. Lowe, “Object class recognition and localization using sparse features with limited receptive fields,” International Journal of Computer Vision, vol. 80, iss. 1, 2008, pp. 45–57. https://doi.org/10.1007/s11263-007-0118-0
[13] K. Fukushima, “Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position,” Biological Cybernetics, vol. 36, iss. 4, 1980, pp. 193–202. https://doi.org/10.1007/BF00344251
[16] D. Ciresan, U. Meier, J. Masci, L. M. Gambardella, and J. Schmidhuber, “Flexible, high performance convolutional neural networks for image classification,” Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, vol. 2, 2011, pp. 1237–1242.
[19] L. Guo, S. Li, X. Niu, and Y. Dou, “A study on layer connection strategies in stacked convolutional deep belief networks,” Pattern Recognition, 6th Chinese Conference, CCPR 2014, Changsha, China, November 17–19, 2014 (Proceedings, Part I), 2014, pp. 81–90. https://doi.org/10.1007/978-3-662-45646-0_9
[20] Z. Wang, Z. Deng, and S. Wang, “Accelerating convolutional neural networks with dominant convolutional kernel and knowledge preregression,” Computer Vision–ECCV 2016, 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016 (Proceedings, Part VIII), 2016, pp. 533–548. https://doi.org/10.1007/978-3-319-46484-8_32
[21] Z.-Z. Li, Z.-Y. Zhong, and L.-W. Jin, “Identifying best hyperparameters for deep architectures using random forests,” Learning and Intelligent Optimization, 9th International Conference, LION 9, Lille, France, January 12–15, 2015 (Revised Selected Papers), 2015, pp. 29–42. https://doi.org/10.1007/978-3-319-19084-6_4
[22] C. A. Ronao and S.-B. Cho, “Deep convolutional neural networks for human activity recognition with smartphone sensors,” Neural Information Processing, 22nd International Conference, ICONIP 2015, November 9–12, 2015 (Proceedings, Part IV), 2015, pp. 46–53. https://doi.org/10.1007/978-3-319-26561-2_6
[23] A. Azadeh, M. Saberi, A. Kazem, V. Ebrahimipour, A. Nourmohammadzadeh, and Z. Saberi, “A flexible algorithm for fault diagnosis in a centrifugal pump with corrupted data and noise based on ANN and support vector machine with hyper-parameters optimization,” Applied Soft Computing, vol. 13, iss. 3, 2013, pp. 1478–1485. https://doi.org/10.1016/j.asoc.2012.06.020
[27] K. Simonyan, A. Vedaldi, and A. Zisserman, “Deep inside convolutional networks: Visualising image classification models and saliency maps,” Computer Vision and Pattern Recognition, arXiv:1312.6034v2 [cs.CV], 2014.
[31] J. Yosinski, J. Clune, A. Nguyen, T. Fuchs, and H. Lipson, “Understanding neural networks through deep visualization,” Computer Vision and Pattern Recognition, arXiv:1506.06579v1 [cs.CV], 2015.
[32] L. A. Gatys, A. S. Ecker, and M. Bethge, “Texture synthesis and the controlled generation of natural stimuli using convolutional neural networks,” Computer Vision and Pattern Recognition, arXiv:1505.07376v1 [cs.CV], 2015.
[33] H. Jégou, M. Douze, C. Schmid, and P. Pérez, “Aggregating local descriptors into a compact image representation,” 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010, pp. 3304–3311. https://doi.org/10.1109/CVPR.2010.5540039
[35] C. Schmid and R. Mohr, “Local grayvalue invariants for image retrieval,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, iss. 5, 1997, pp. 530–535. https://doi.org/10.1109/34.589215
[37] Y. LeCun, F. J. Huang, and L. Bottou, “Learning methods for generic object recognition with invariance to pose and lighting,” International Conference on Computer Vision and Pattern Recognition, vol. 2, 2004, pp. 97–104. https://doi.org/10.1109/CVPR.2004.1315150
[38] V. V. Romanuke, “Boosting ensembles of heavy two-layer perceptrons for increasing classification accuracy in recognizing shifted-turned-scaled flat images with binary features,” Journal of Information and Organizational Sciences, vol. 39, no. 1, 2015, pp. 75–84.
[39] V. V. Romanuke, “Optimal training parameters and hidden layer neurons number of two-layer perceptron for generalized scaled objects classification problem,” Information Technology and Management Science, vol. 18, 2015, pp. 42–48. https://doi.org/10.1515/itms-2015-0007
[40] V. V. Romanuke, “Two-layer perceptron for classifying flat scaled-turned-shifted objects by additional feature distortions in training,” Journal of Uncertain Systems, vol. 9, no. 4, 2015, pp. 286–305.
[41] V. V. Romanuke, “An attempt for 2-layer perceptron high performance in classifying shifted monochrome 60-by-80-images via training with pixel-distorted shifted images on the pattern of 26 alphabet letters,” Radio Electronics, Computer Science, Control, no. 2, 2013, pp. 112–118. https://doi.org/10.15588/1607-3274-2013-2-18
[43] V. V. Romanuke, “Training data expansion and boosting of convolutional neural networks for reducing the MNIST dataset error rate,” Research Bulletin of the National Technical University of Ukraine “Kyiv Polytechnic Institute”, no. 6, 2016, pp. 29–34. https://doi.org/10.20535/1810-0546.2016.6.84115
[44] V. V. Romanuke, “Uniform sampling of fundamental simplexes as sets of players’ mixed strategies in the finite noncooperative game for finding equilibrium situations with possible concessions,” Journal of Automation and Information Sciences, vol. 47, iss. 9, 2015, pp. 76–85. https://doi.org/10.1615/JAutomatInfScien.v47.i9.70
[45] V. V. Romanuke, “Sampling individually fundamental simplexes as sets of players’ mixed strategies in finite noncooperative game for applicable approximate Nash equilibrium situations with possible concessions,” Journal of Information and Organizational Sciences, vol. 40, no. 1, 2016, pp. 105–143. https://doi.org/10.31341/jios.40.1.6