
Gaze Tracking for Hands-Free Human Using Deep Reinforcement Learning Approach

Open Access | Dec 2023

References

  1. World Health Organization. (2011). World Report on Disability. Accessed: Sep. 9, 2016. [Online]. Available: http://www.who.int/disabilities/worldreport/2011/en/.
  2. Wilkinson, K. M., & Hennig, S. (2007). The state of research and practice in augmentative and alternative communication for children with developmental/intellectual disabilities. Developmental Disabilities Research Reviews, 13(1), 58–69.
  3. DeCoste, D. C., & Glennen, S. (1997). The Handbook of Augmentative and Alternative Communication. San Diego, CA, USA: Singular.
  4. de Sousa Gomide, R., Loja, L. F. B., Lemos, R. P., Flôres, E. L., Melo, F. R., & Teixeira, R. A. G. (2016). A new concept of assistive virtual keyboards based on a systematic review of text entry optimization techniques. Research in Biomedical Engineering, 32(2), 176–198.
  5. Mele, M. L., & Federici, S. (2012). A psychotechnological review on eye-tracking systems: Towards user experience. Disability and Rehabilitation: Assistive Technology, 7(4), 261–281.
  6. Park, S.-W., Yim, Y.-L., Yi, S.-H., Kim, H.-Y., & Jung, S.-M. (2012). Augmentative and alternative communication training using eye blink switch for locked-in syndrome patient. Annals of Rehabilitation Medicine, 36(2), 268–272.
  7. Cipresso, P., et al. (2011). The combined use of brain-computer interface and eye-tracking technology for cognitive assessment in amyotrophic lateral sclerosis. In Proceedings of the IEEE International Conference on PervasiveHealth (pp. 320–324).
  8. Schalk, G., Brunner, P., Gerhardt, L. A., Bischof, H., & Wolpaw, J. R. (2008). Brain–computer interfaces (BCIs): Detection instead of classification. Journal of Neuroscience Methods, 167(1), 51–62.
  9. Usakli, A. B., & Gurkan, S. (2010). Design of a novel efficient human-computer interface: An electrooculogram based virtual keyboard. IEEE Transactions on Instrumentation and Measurement, 59(8), 2099–2108.
  10. Fu, Y.-F., & Ho, C.-S. (2009). A fast text-based communication system for handicapped aphasiacs. In Proceedings of the IEEE Conference on Information Intelligence and Security (pp. 583–594).
  11. Orhan, U., Hild, K. E., Erdogmus, D., Roark, B., Oken, B., & Fried-Oken, M. (2013). RSVP keyboard: An EEG-based typing interface. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (pp. 1–11).
  12. Biswas, P., & Samanta, D. (2008). Friend: A communication aid for persons with disabilities. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 16(2), 205–209.
  13. Molina, A. J., Rivera, O., & Gómez, I. (2009). Measuring performance of virtual keyboards based on cyclic scanning. In Proceedings of the IEEE International Conference on Autonomic and Autonomous Systems (pp. 174–178).
  14. Ghosh, S., Sarcar, S., & Samanta, D. (2011). Designing an efficient virtual keyboard for text composition in Bengali. In Proceedings of the ACM International Conference on Human-Computer Interaction in India (pp. 84–87).
  15. Bhattacharya, S., & Laha, S. (2013). Bengali text input interface design for mobile devices. Universal Access in the Information Society, 12(4), 441–451.
  16. Samanta, D., Sarcar, S., & Ghosh, S. (2013). An approach to design virtual keyboards for text composition in Indian languages. International Journal of Human-Computer Interaction, 29(8), 516–540.
  17. Sutton, R. S., & Barto, A. G. (1998). Reinforcement Learning: An Introduction (1st ed.). Cambridge, MA, USA: MIT Press.
  18. Ward, D. J., & MacKay, D. J. C. (2002). Artificial intelligence: Fast hands-free writing by gaze direction. Nature, 418, 838.
  19. Rough, D., Vertanen, K., & Kristensson, P. O. (2014). An evaluation of dasher with a high-performance language model as a gaze communication method. In Proceedings of the International Working Conference on Advanced Visual Interfaces (pp. 169–176).
  20. Cecotti, H. (2016). A multimodal gaze-controlled virtual keyboard. IEEE Transactions on Human-Machine Systems, 46(4), 601–606.
  21. Sarcar, S., & Panwar, P. (2013). Eyeboard++: An enhanced eye gaze-based text entry system in Hindi. In Proceedings of the ACM International Conference on Computer-Human Interaction (pp. 354–363).
  22. Anson, D., et al. (2006). The effects of word completion and word prediction on typing rates using on-screen keyboards. Assistive Technology, 18(2), 146–154.
  23. Pouplin, S., et al. (2014). Effect of a dynamic keyboard and word prediction systems on text input speed in patients with functional tetraplegia. Journal of Rehabilitation Research & Development, 51(3), 467–480.
  24. Jacob, R. J. K. (1990). What you look at is what you get: Eye movement-based interaction techniques. In Proceedings of the ACM International Conference on Human Factors in Computing Systems (pp. 11–18).
  25. Prabhu, V., & Prasad, G. (2011). Designing a virtual keyboard with multimodal access for people with disabilities. In Proceedings of the IEEE International Conference on Information and Communication Technologies (pp. 1133–1138).
  26. Singh, J. V., & Prasad, G. (2015). Enhancing an eye-tracker based human-computer interface with multi-modal accessibility applied for text entry. International Journal of Computers & Applications, 130(16), 16–22.
  27. Cretual, A., & Chaumette, F. (2001). Application of motion-based visual servoing to target tracking. The International Journal of Robotics Research, 20(2), 169–182.
  28. Gaskett, C., Fletcher, L., Zelinsky, A., et al. (2000). Reinforcement learning for visual servoing of a mobile robot. In Australian Conference on Robotics and Automation.
  29. Bustamante, G., Danes, P., Forgue, T., & Podlubne, A. (2016). Towards information-based feedback control for binaural active localization. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing.
  30. Magassouba, A., Bertin, N., & Chaumette, F. (2018). Aural servo: Sensor-based control from robot audition. IEEE Transactions on Robotics, 34(1), 169–186.
  31. Ghadirzadeh, A., Bütepage, J., Maki, A., Kragic, D., & Björkman, M. (2016). A sensorimotor reinforcement learning framework for physical human-robot interaction. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 2682–2688).
  32. Mitsunaga, N., Smith, C., Kanda, T., Ishiguro, H., & Hagita, N. (2006). Robot behavior adaptation for human-robot interaction based on policy gradient reinforcement learning. Journal of Robotic Systems, 23(10), 545–554.
  33. Thomaz, A. L., Hoffman, G., & Breazeal, C. (2006). Reinforcement learning with human teachers: Understanding how people want to teach robots. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 352–357).
  34. Cruz, F., Parisi, G. I., Twiefel, J., & Wermter, S. (2016). Multi-modal integration of dynamic audiovisual patterns for an interactive reinforcement learning scenario. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 759–766).
  35. Rothbucher, M., Denk, C., & Diepold, K. (2012). Robotic gaze control using reinforcement learning. In Proceedings of the IEEE International Symposium on Haptic Audio-Visual Environments and Games.
  36. Qureshi, A. H., Nakamura, Y., Yoshikawa, Y., & Ishiguro, H. (2016). Robot gains social intelligence through multimodal deep reinforcement learning. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 745–751).
  37. Qureshi, A. H., Nakamura, Y., Yoshikawa, Y., & Ishiguro, H. (2017). Show, attend and interact: Perceivable human-robot social interaction through neural attention Q-network. In Proceedings of the IEEE International Conference on Robotics and Automation.
  38. Vazquez, M., Steinfeld, A., & Hudson, S. E. (2016). Maintaining awareness of the focus of attention of a conversation: A robot-centric reinforcement learning approach. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems.
  39. Bennewitz, M., Faber, F., Joho, D., Schreiber, M., & Behnke, S. (2005). Towards a humanoid museum guide robot that interacts with multiple persons. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 418–423).
  40. Ban, Y., Alameda-Pineda, X., Badeig, F., Ba, S., & Horaud, R. (2017). Tracking a varying number of people with a visually-controlled robotic head. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems.
  41. Yun, S.-S. (2017). A gaze control of socially interactive robots in multiple-person interaction. Robotica, 35(11), 2122–2138.
Language: English
Page range: 105–114
Submitted on: Aug 23, 2021
Accepted on: Sep 17, 2021
Published on: Dec 15, 2023
Published by: Future Sciences For Digital Publishing
In partnership with: Paradigm Publishing Services
Publication frequency: 2 issues per year

© 2023 Irfan Ullah, Abid Ali, Shahid Rasool, Abdul Moiz Khan, Iqra Batool, Manahil Javed, Sarara Kalsoom, published by Future Sciences For Digital Publishing
This work is licensed under the Creative Commons Attribution 4.0 License.