Wilkinson, K. M., & Hennig, S. (2007). The state of research and practice in augmentative and alternative communication for children with developmental/intellectual disabilities. Developmental Disabilities Research Reviews, 13(1), 58–69.
de Sousa Gomide, R., Loja, L. F. B., Lemos, R. P., Flôres, E. L., Melo, F. R., & Teixeira, R. A. G. (2016). A new concept of assistive virtual keyboards based on a systematic review of text entry optimization techniques. Research in Biomedical Engineering, 32(2), 176–198.
Mele, M. L., & Federici, S. (2012). A psychotechnological review on eye-tracking systems: Towards user experience. Disability and Rehabilitation: Assistive Technology, 7(4), 261–281.
Park, S.-W., Yim, Y.-L., Yi, S.-H., Kim, H.-Y., & Jung, S.-M. (2012). Augmentative and alternative communication training using eye blink switch for locked-in syndrome patient. Annals of Rehabilitation Medicine, 36(2), 268–272.
Cipresso, P., et al. (2011). The combined use of brain-computer interface and eye-tracking technology for cognitive assessment in amyotrophic lateral sclerosis. In Proceedings of the IEEE International Conference on PervasiveHealth (pp. 320–324).
Schalk, G., Brunner, P., Gerhardt, L. A., Bischof, H., & Wolpaw, J. R. (2008). Brain–computer interfaces (BCIs): Detection instead of classification. Journal of Neuroscience Methods, 167(1), 51–62.
Usakli, A. B., & Gurkan, S. (2010). Design of a novel efficient human-computer interface: An electrooculogram based virtual keyboard. IEEE Transactions on Instrumentation and Measurement, 59(8), 2099–2108.
Fu, Y.-F., & Ho, C.-S. (2009). A fast text-based communication system for handicapped aphasiacs. In Proceedings of the IEEE Conference on Information Intelligence and Security (pp. 583–594).
Orhan, U., Hild, K. E., Erdogmus, D., Roark, B., Oken, B., & Fried-Oken, M. (2013). RSVP keyboard: An EEG based typing interface. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (pp. 1–11).
Biswas, P., & Samanta, D. (2008). FRIEND: A communication aid for persons with disabilities. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 16(2), 205–209.
Molina, A. J., Rivera, O., & Gómez, I. (2009). Measuring performance of virtual keyboards based on cyclic scanning. In Proceedings of the IEEE International Conference on Autonomic and Autonomous Systems (pp. 174–178).
Ghosh, S., Sarcar, S., & Samanta, D. (2011). Designing an efficient virtual keyboard for text composition in Bengali. In Proceedings of the ACM International Conference on Human-Computer Interaction in India (pp. 84–87).
Bhattacharya, S., & Laha, S. (2013). Bengali text input interface design for mobile devices. Universal Access in the Information Society, 12(4), 441–451.
Samanta, D., Sarcar, S., & Ghosh, S. (2013). An approach to design virtual keyboards for text composition in Indian languages. International Journal of Human-Computer Interaction, 29(8), 516–540.
Rough, D., Vertanen, K., & Kristensson, P. O. (2014). An evaluation of Dasher with a high-performance language model as a gaze communication method. In Proceedings of the International Working Conference on Advanced Visual Interfaces (pp. 169–176).
Sarcar, S., & Panwar, P. (2013). EyeBoard++: An enhanced eye gaze-based text entry system in Hindi. In Proceedings of the ACM International Conference on Computer-Human Interaction (pp. 354–363).
Anson, D., et al. (2006). The effects of word completion and word prediction on typing rates using on-screen keyboards. Assistive Technology, 18(2), 146–154.
Pouplin, S., et al. (2014). Effect of a dynamic keyboard and word prediction systems on text input speed in patients with functional tetraplegia. Journal of Rehabilitation Research & Development, 51(3), 467–480.
Jacob, R. J. K. (1990). What you look at is what you get: Eye movement-based interaction techniques. In Proceedings of the ACM International Conference on Human Factors in Computing Systems (pp. 11–18).
Prabhu, V., & Prasad, G. (2011). Designing a virtual keyboard with multimodal access for people with disabilities. In Proceedings of the IEEE International Conference on Information and Communication Technologies (pp. 1133–1138).
Singh, J. V., & Prasad, G. (2015). Enhancing an eye-tracker based human-computer interface with multi-modal accessibility applied for text entry. International Journal of Computer Applications, 130(16), 16–22.
Cretual, A., & Chaumette, F. (2001). Application of motion-based visual servoing to target tracking. The International Journal of Robotics Research, 20(2), 169–182.
Gaskett, C., Fletcher, L., & Zelinsky, A. (2000). Reinforcement learning for visual servoing of a mobile robot. In Proceedings of the Australian Conference on Robotics and Automation.
Bustamante, G., Danès, P., Forgue, T., & Podlubne, A. (2016). Towards information-based feedback control for binaural active localization. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing.
Magassouba, A., Bertin, N., & Chaumette, F. (2018). Aural servo: Sensor-based control from robot audition. IEEE Transactions on Robotics, 34(1), 169–186.
Ghadirzadeh, A., Bütepage, J., Maki, A., Kragic, D., & Björkman, M. (2016). A sensorimotor reinforcement learning framework for physical human-robot interaction. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 2682–2688).
Thomaz, A. L., Hoffman, G., & Breazeal, C. (2006). Reinforcement learning with human teachers: Understanding how people want to teach robots. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 352–357).
Cruz, F., Parisi, G. I., Twiefel, J., & Wermter, S. (2016). Multi-modal integration of dynamic audiovisual patterns for an interactive reinforcement learning scenario. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 759–766).
Rothbucher, M., Denk, C., & Diepold, K. (2012). Robotic gaze control using reinforcement learning. In Proceedings of the IEEE International Symposium on Haptic Audio-Visual Environments and Games.
Qureshi, A. H., Nakamura, Y., Yoshikawa, Y., & Ishiguro, H. (2016). Robot gains social intelligence through multimodal deep reinforcement learning. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 745–751).
Qureshi, A. H., Nakamura, Y., Yoshikawa, Y., & Ishiguro, H. (2017). Show, attend and interact: Perceivable human-robot social interaction through neural attention Q-network. In Proceedings of the IEEE International Conference on Robotics and Automation.
Vazquez, M., Steinfeld, A., & Hudson, S. E. (2016). Maintaining awareness of the focus of attention of a conversation: A robot-centric reinforcement learning approach. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems.
Bennewitz, M., Faber, F., Joho, D., Schreiber, M., & Behnke, S. (2005). Towards a humanoid museum guide robot that interacts with multiple persons. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 418–423).
Ban, Y., Alameda-Pineda, X., Badeig, F., Ba, S., & Horaud, R. (2017). Tracking a varying number of people with a visually-controlled robotic head. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems.