
Human action recognition using descriptor based on selective finite element analysis

Open Access | Dec 2019

DOI: https://doi.org/10.2478/jee-2019-0077 | Journal eISSN: 1339-309X | Journal ISSN: 1335-3632
Language: English
Page range: 443 - 453
Submitted on: Oct 18, 2019
Published on: Dec 31, 2019
In partnership with: Paradigm Publishing Services
Publication frequency: 6 issues per year

© 2019 Rajiv Kapoor, Om Mishra, Madan Mohan Tripathi, published by Slovak University of Technology in Bratislava
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.