[1] A. F. Bobick and J. W. Davis, “The recognition of human movement using temporal templates,” <em>IEEE Transactions on Pattern Analysis and Machine Intelligence</em>, vol. 23, no. 3, pp. 257-267, 2001. doi: 10.1109/34.910878.
[2] R. Souvenir and J. Babbs, “Learning the viewpoint manifold for action recognition,” <em>IEEE Conference on Computer Vision and Pattern Recognition (CVPR’08)</em>, pp. 1-7, 2008. doi: 10.1109/CVPR.2008.4587552.
[3] M. Blank, L. Gorelick, E. Shechtman, M. Irani, and R. Basri, “Actions as space-time shapes,” <em>IEEE International Conference on Computer Vision (ICCV’05)</em>, vol. 2, pp. 1395-1402, 2005. doi: 10.1109/ICCV.2005.28.
[4] M. Blank, L. Gorelick, E. Shechtman, M. Irani, and R. Basri, “Actions as space-time shapes,” <em>IEEE Transactions on Pattern Analysis and Machine Intelligence</em>, vol. 29, no. 12, pp. 2247-2253, 2007. doi: 10.1109/TPAMI.2007.70711.
[5] K. Guo, P. Ishwar, and J. Konrad, “Action recognition from video using feature covariance matrices,” <em>IEEE Transactions on Image Processing</em>, vol. 22, no. 6, pp. 2479-2494, 2013. doi: 10.1109/TIP.2013.2252622.
[6] Y. Chen, Z. Li, X. Guo, Y. Zhao, and A. Cai, “A spatio-temporal interest point detector based on vorticity for action recognition,” <em>IEEE International Conference on Multimedia and Expo Workshops (ICMEW)</em>, pp. 1-6, 2013. doi: 10.1109/ICMEW.2013.6618448.
[7] I. Laptev, M. Marszalek, C. Schmid, and B. Rozenfeld, “Learning realistic human actions from movies,” <em>IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</em>, pp. 1-8, 2008. doi: 10.1109/CVPR.2008.4587756.
[8] S. Savarese, A. DelPozo, J. C. Niebles, and L. Fei-Fei, “Spatial-temporal correlations for unsupervised action classification,” <em>Proceedings of the IEEE Workshop on Motion and Video Computing</em>, pp. 1-8, 2008. doi: 10.1109/WMVC.2008.4544068.
[9] M. S. Ryoo and J. K. Aggarwal, “Spatio-temporal relationship match: Video structure comparison for recognition of complex human activities,” <em>IEEE 12th International Conference on Computer Vision (ICCV)</em>, pp. 1593-1600, 2009. doi: 10.1109/ICCV.2009.5459361.
[10] I. Laptev and T. Lindeberg, “Space-time interest points,” <em>Proceedings of the Ninth IEEE International Conference on Computer Vision (ICCV)</em>, pp. 432-439, 2003. doi: 10.1109/ICCV.2003.1238378.
[11] A. Klaser, M. Marszalek, and C. Schmid, “A spatio-temporal descriptor based on 3D-gradients,” <em>Proceedings of the British Machine Vision Conference (BMVC)</em>, pp. 995-1004, 2008. doi: 10.5244/C.22.99.
[12] G. Willems, T. Tuytelaars, and L. Van Gool, “An efficient dense and scale-invariant spatio-temporal interest point detector,” <em>European Conference on Computer Vision (ECCV), LNCS vol. 5303</em>, pp. 650-663, 2008. doi: 10.1007/978-3-540-88688-4_48.
[14] N. Ballas, L. Yao, C. Pal, and A. Courville, “Delving deeper into convolutional networks for learning video representations,” <em>International Conference on Learning Representations (ICLR)</em>, 2016.
[15] L. Wang, Y. Qiao, and X. Tang, “Action recognition with trajectory-pooled deep-convolutional descriptors,” <em>IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</em>, pp. 4305-4314, 2015. doi: 10.1109/CVPR.2015.7299059.
[16] L. Sun, K. Jia, D.-Y. Yeung, and B. E. Shi, “Human action recognition using factorized spatio-temporal convolutional networks,” <em>IEEE International Conference on Computer Vision (ICCV)</em>, pp. 4597-4605, 2015. doi: 10.1109/ICCV.2015.522.
[17] D. K. Vishwakarma and K. Singh, “Human activity recognition based on the spatial distribution of gradients at sub-levels of average energy silhouette images,” <em>IEEE Transactions on Cognitive and Developmental Systems</em>, vol. 9, no. 4, pp. 316-327, 2017. doi: 10.1109/TCDS.2016.2577044.
[18] D. K. Vishwakarma and R. Kapoor, “Hybrid classifier based human activity recognition using the silhouettes and cells,” <em>Expert Systems with Applications</em>, vol. 42, no. 20, pp. 6957-6965, 2015. doi: 10.1016/j.eswa.2015.04.039.
[19] D. Wu and L. Shao, “Silhouette analysis-based action recognition via exploiting human poses,” <em>IEEE Transactions on Circuits and Systems for Video Technology</em>, vol. 23, no. 2, pp. 236-243, 2013. doi: 10.1109/TCSVT.2012.2203731.
[20] D. Weinland, M. Ozuysal, and P. Fua, “Making action recognition robust to occlusions and viewpoint changes,” <em>European Conference on Computer Vision (ECCV)</em>, pp. 635-648, 2010. doi: 10.1007/978-3-642-15558-1_46.
[22] A. A. Chaaraoui, P. Climent-Pérez, and F. Florez-Revuelta, “Silhouette-based human action recognition using sequences of key poses,” <em>Pattern Recognition Letters</em>, vol. 34, no. 15, pp. 1799-1807, 2013. doi: 10.1016/j.patrec.2013.01.021.
[23] G. Goudelis, K. Karpouzis, and S. Kollias, “Exploring trace transform for robust human action recognition,” <em>Pattern Recognition</em>, vol. 46, no. 12, pp. 3238-3248, 2013. doi: 10.1016/j.patcog.2013.06.006.
[24] R. Touati and M. Mignotte, “MDS-based multi-axial dimensionality reduction model for human action recognition,” <em>Canadian Conference on Computer and Robot Vision (CRV)</em>, pp. 262-267, 2014. doi: 10.1109/CRV.2014.42.
[26] Y. Fu, T. Zhang, and W. Wang, “Sparse coding-based space-time video representation for action recognition,” <em>Multimedia Tools and Applications</em>, vol. 76, no. 10, pp. 12645-12658, 2017. doi: 10.1007/s11042-016-3630-9.
[28] H. Liu, N. Shu, Q. Tang, and W. Zhang, “Computational model based on the neural network of visual cortex for human action recognition,” <em>IEEE Transactions on Neural Networks and Learning Systems</em>, vol. 29, no. 5, pp. 1427-1440, 2017. doi: 10.1109/TNNLS.2017.2669522.
[29] Y. Shi, Y. Tian, Y. Wang, and T. Huang, “Sequential deep trajectory descriptor for action recognition with three-stream CNN,” <em>IEEE Transactions on Multimedia</em>, vol. 19, no. 7, pp. 1510-1520, 2017. doi: 10.1109/TMM.2017.2666540.
[30] 2D Triangular Elements, The University of New Mexico, http://www.unm.edu/bgreen/ME360/2D%20Triangular%20Elements.pdf. Accessed 24 February 2010.
[31] D. K. Jha, T. Kant, and R. K. Singh, “An accurate two-dimensional theory for deformation and stress analysis of functionally graded thick plates,” <em>International Journal of Advanced Structural Engineering</em>, pp. 6-7, 2014. doi: 10.1007/s40091-014-0062-5.
[32] J. Dou and J. Li, “Robust human action recognition based on spatio-temporal descriptors and motion temporal templates,” <em>Optik</em>, vol. 125, no. 7, pp. 1891-1896, 2014. doi: 10.1016/j.ijleo.2013.10.022.
[33] Q. Song, W. Hu, and W. Xie, “Robust support vector machine for bullet hole image classification,” <em>IEEE Transactions on Systems, Man, and Cybernetics, Part C</em>, vol. 32, pp. 440-448, 2002. doi: 10.1109/TSMCC.2002.807277.
[34] S. S. Keerthi and C.-J. Lin, “Asymptotic behaviors of support vector machines with Gaussian kernel,” <em>Neural Computation</em>, vol. 15, no. 7, pp. 1667-1689, 2003. doi: 10.1162/089976603321891855.
[35] C. Schuldt, I. Laptev, and B. Caputo, “Recognizing human actions: a local SVM approach,” <em>Proceedings of the 17th International Conference on Pattern Recognition (ICPR)</em>, Cambridge, UK, 2004. doi: 10.1109/ICPR.2004.1334462.
[36] T. Guha and R. K. Ward, “Learning sparse representations for human action recognition,” <em>IEEE Transactions on Pattern Analysis and Machine Intelligence</em>, vol. 34, no. 8, pp. 1576-1588, 2012. doi: 10.1109/TPAMI.2011.253.
[38] S. A. Rahman, I. Song, M. K. H. Leung, I. Lee, and K. Lee, “Fast action recognition using negative space features,” <em>Expert Systems with Applications</em>, vol. 41, no. 2, pp. 574-587, 2014. doi: 10.1016/j.eswa.2013.07.082.
[39] I. Gomez-Conde and D. N. Olivieri, “A KPCA spatio-temporal differential geometric trajectory cloud classifier for recognizing human actions in a CBVR system,” <em>Expert Systems with Applications</em>, vol. 42, no. 13, pp. 5472-5490, 2015. doi: 10.1016/j.eswa.2015.03.010.
[40] L. Juan and O. Gwun, “A comparison of SIFT, PCA-SIFT and SURF,” <em>International Journal of Image Processing</em>, vol. 3, no. 4, pp. 143-152, 2009.
[41] Y. Wang and G. Mori, “Human action recognition using semilatent topic models,” <em>IEEE Transactions on Pattern Analysis and Machine Intelligence</em>, vol. 31, no. 10, pp. 1762-1764, 2009. doi: 10.1109/TPAMI.2009.43.
[43] A. Iosifidis, A. Tefas, and I. Pitas, “Discriminant bag of words based representation for human action recognition,” <em>Pattern Recognition Letters</em>, vol. 49, no. 1, pp. 185-192, 2014. doi: 10.1016/j.patrec.2014.07.011.
[44] X. Wu, D. Xu, L. Duan, and J. Luo, “Action recognition using context and appearance distribution features,” <em>IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</em>, pp. 489-496, 2011. doi: 10.1109/CVPR.2011.5995624.
[46] E. A. Mosabbeb, K. Raahemifar, and M. Fathy, “Multi-view human activity recognition in distributed camera sensor networks,” <em>Sensors</em>, vol. 13, no. 7, pp. 8750-8770, 2013. doi: 10.3390/s130708750.
[47] J. Wang, H. Zheng, J. Gao, and J. Cen, “Cross-view action recognition based on a statistical translation framework,” <em>IEEE Transactions on Circuits and Systems for Video Technology</em>, vol. 26, no. 8, pp. 1461-1475, 2016. doi: 10.1109/TCSVT.2014.2382984.