
References

[1] D. Te’eni, J. Carey and P. Zhang, Human Computer Interaction: Developing Effective Organizational Information Systems, John Wiley & Sons, Hoboken (2007).
[2] B. Shneiderman and C. Plaisant, Designing the User Interface: Strategies for Effective Human-Computer Interaction (4th edition), Pearson/Addison-Wesley, Boston (2004).
[3] J. Nielsen, Usability Engineering, Morgan Kaufmann, San Francisco (1994).
[4] D. Te’eni, “Designs that fit: an overview of fit conceptualization in HCI”, in P. Zhang and D. Galletta (eds), Human-Computer Interaction and Management Information Systems: Foundations, M.E. Sharpe, Armonk (2006).
[5] A. Chapanis, Man-Machine Engineering, Wadsworth, Belmont (1965).
[6] D. Norman, “Cognitive Engineering”, in D. Norman and S. Draper (eds), User Centered System Design: New Perspectives on Human-Computer Interaction, Lawrence Erlbaum, Hillsdale (1986).
[7] R.W. Picard, Affective Computing, MIT Press, Cambridge (1997). doi:10.1037/e526112012-054
[8] J.S. Greenstein, “Pointing devices”, in M.G. Helander, T.K. Landauer and P. Prabhu (eds), Handbook of Human-Computer Interaction, Elsevier Science, Amsterdam (1997).
[9] B.A. Myers, “A brief history of human-computer interaction technology”, ACM interactions, 5(2), pp 44-54 (1998). doi:10.1145/274430.274436
[10] B. Shneiderman, Designing the User Interface: Strategies for Effective Human-Computer Interaction (3rd edition), Addison Wesley Longman, Reading (1998).
[11] A. Murata, “An experimental evaluation of mouse, joystick, joycard, lightpen, trackball and touchscreen for Pointing - Basic Study on Human Interface Design”, Proceedings of the Fourth International Conference on Human-Computer Interaction, pp 123-127 (1991).
[12] L.R. Rabiner, Fundamentals of Speech Recognition, Prentice Hall, Englewood Cliffs (1993).
[13] C.M. Karat, J. Vergo and D. Nahamoo, “Conversational interface technologies”, in J.A. Jacko and A. Sears (eds), The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies, and Emerging Applications, Lawrence Erlbaum Associates, Mahwah (2003).
[14] S. Brewster, “Non-speech auditory output”, in J.A. Jacko and A. Sears (eds), The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies, and Emerging Applications, Lawrence Erlbaum Associates, Mahwah (2003).
[15] G. Robles-De-La-Torre, “The importance of the sense of touch in virtual and real environments”, IEEE Multimedia, 13(3), Special issue on Haptic User Interfaces for Multimedia Systems, pp 24-30 (2006). doi:10.1109/MMUL.2006.69
[16] V. Hayward, O.R. Astley, M. Cruz-Hernandez, D. Grant and G. Robles-De-La-Torre, “Haptic interfaces and devices”, Sensor Review, 24(1), pp 16-29 (2004). doi:10.1108/02602280410515770
[17] J. Vince, Introduction to Virtual Reality, Springer, London (2004). doi:10.1007/978-0-85729-386-2
[18] H. Iwata, “Haptic interfaces”, in J.A. Jacko and A. Sears (eds), The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies, and Emerging Applications, Lawrence Erlbaum Associates, Mahwah (2003).
[19] W. Barfield and T. Caudell, Fundamentals of Wearable Computers and Augmented Reality, Lawrence Erlbaum Associates, Mahwah (2001). doi:10.1201/9780585383590
[20] M.D. Yacoub, Wireless Technology: Protocols, Standards, and Techniques, CRC Press, London (2002).
[21] K. McMenemy and S. Ferguson, A Hitchhiker’s Guide to Virtual Reality, A K Peters, Wellesley (2007). doi:10.1201/b10677
[22] Global Positioning System, “Home page”, http://www.gps.gov/, visited on 10/10/2007.
[23] S.G. Burnay, T.L. Williams and C.H. Jones, Applications of Thermal Imaging, A. Hilger, Bristol (1988).
[24] J.Y. Chai, P. Hong and M.X. Zhou, “A probabilistic approach to reference resolution in multimodal user interfaces”, Proceedings of the 9th International Conference on Intelligent User Interfaces, Funchal, Madeira, Portugal, pp 70-77 (2004). doi:10.1145/964442.964457
[25] E.A. Bretz, “When work is fun and games”, IEEE Spectrum, 39(12), p 50 (2002). doi:10.1109/MSPEC.2002.1088457
[26] ExtremeTech, “Canesta says ‘Virtual Keyboard’ is reality”, http://www.extremetech.com/article2/0,1558,539778,00.asp, visited on 15/10/2007.
[27] G. Riva, F. Vatalaro, F. Davide and M. Alcañiz, Ambient Intelligence: The Evolution of Technology, Communication and Cognition towards the Future of HCI, IOS Press, Fairfax (2005).
[28] M.T. Maybury and W. Wahlster, Readings in Intelligent User Interfaces, Morgan Kaufmann Press, San Francisco (1998). doi:10.1145/291080.291081
[29] A. Kirlik, Adaptive Perspectives on Human-Technology Interaction, Oxford University Press, Oxford (2006).
[30] S.L. Oviatt, P. Cohen, L. Wu, J. Vergo, L. Duncan, B. Suhm, J. Bers, T. Holzman, T. Winograd, J. Landay, J. Larson and D. Ferro, “Designing the user interface for multimodal speech and pen-based gesture applications: state-of-the-art systems and future research directions”, Human-Computer Interaction, 15, pp 263-322 (2000).
[31] D.M. Gavrila, “The visual analysis of human movement: a survey”, Computer Vision and Image Understanding, 73(1), pp 82-98 (1999).
[32] L.E. Sibert and R.J.K. Jacob, “Evaluation of eye gaze interaction”, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2000), pp 281-288 (2000). doi:10.1145/332040.332445
[33] Various Authors, “Adaptive, intelligent and emotional user interfaces”, Part II of HCI Intelligent Multimodal Interaction Environments, 12th International Conference, HCI International 2007 (Proceedings Part III), Springer, Berlin/Heidelberg (2007).
[34] M.N. Huhns and M.P. Singh (eds), Readings in Agents, Morgan Kaufmann, San Francisco (1998).
[35] C.S. Wasson, System Analysis, Design, and Development: Concepts, Principles, and Practices, John Wiley & Sons, Hoboken (2006).
[36] A. Jaimes and N. Sebe, “Multimodal human computer interaction: a survey”, Computer Vision and Image Understanding, 108(1-2), pp 116-134 (2007).
[37] I. Cohen, N. Sebe, A. Garg, L. Chen and T.S. Huang, “Facial expression recognition from video sequences: temporal and static modeling”, Computer Vision and Image Understanding, 91(1-2), pp 160-187 (2003).
[38] B. Fasel and J. Luettin, “Automatic facial expression analysis: a survey”, Pattern Recognition, 36, pp 259-275 (2003).
[39] M. Pantic and L.J.M. Rothkrantz, “Automatic analysis of facial expressions: the state of the art”, IEEE Transactions on PAMI, 22(12), pp 1424-1445 (2000).
[40] J.K. Aggarwal and Q. Cai, “Human motion analysis: a review”, Computer Vision and Image Understanding, 73(3), pp 428-440 (1999).
[41] S. Kettebekov and R. Sharma, “Understanding gestures in multimodal human computer interaction”, International Journal on Artificial Intelligence Tools, 9(2), pp 205-223 (2000). doi:10.1142/S021821300000015X
[42] Y. Wu and T. Huang, “Vision-based gesture recognition: a review”, in A. Braffort, R. Gherbi, S. Gibet, J. Richardson and D. Teil (eds), Gesture-Based Communication in Human-Computer Interaction, volume 1739 of Lecture Notes in Artificial Intelligence, Springer-Verlag, Berlin/Heidelberg (1999).
[43] T. Kirishima, K. Sato and K. Chihara, “Real-time gesture recognition by learning and selective control of visual interest points”, IEEE Transactions on PAMI, 27(3), pp 351-364 (2005). doi:10.1109/TPAMI.2005.61
[44] R. Ruddaraju, A. Haro, K. Nagel, Q. Tran, I. Essa, G. Abowd and E. Mynatt, “Perceptual user interfaces using vision-based eye tracking”, Proceedings of the 5th International Conference on Multimodal Interfaces, Vancouver, pp 227-233 (2003). doi:10.1145/958432.958475
[45] A.T. Duchowski, “A breadth-first survey of eye tracking applications”, Behavior Research Methods, Instruments, and Computers, 34(4), pp 455-470 (2002). doi:10.3758/BF03195475
[46] P. Rubin, E. Vatikiotis-Bateson and C. Benoit (eds), “Special issue on audio-visual speech processing”, Speech Communication, 26(1-2) (1998). doi:10.1016/S0167-6393(98)00046-6
[47] J.P. Campbell Jr., “Speaker recognition: a tutorial”, Proceedings of the IEEE, 85(9), pp 1437-1462 (1997).
[48] P.Y. Oudeyer, “The production and recognition of emotions in speech: features and algorithms”, International Journal of Human-Computer Studies, 59(1-2), pp 157-183 (2003).
[49] L.S. Chen, Joint Processing of Audio-Visual Information for the Recognition of Emotional Expressions in Human-Computer Interaction, PhD thesis, UIUC (2000).
[50] M. Schröder, D. Heylen and I. Poggi, “Perception of non-verbal emotional listener feedback”, Proceedings of Speech Prosody 2006, Dresden, Germany, pp 43-46 (2006).
[51] M.J. Lyons, M. Haehnel and N. Tetsutani, “Designing, playing, and performing with a vision-based mouth interface”, Proceedings of the 2003 Conference on New Interfaces for Musical Expression, Montreal, pp 116-121 (2003).
[52] D. Göger, K. Weiss, C. Burghart and H. Wörn, “Sensitive skin for a humanoid robot”, Human-Centered Robotic Systems (HCRS’06), Munich (2006).
[53] O. Khatib, O. Brock, K.S. Chang, D. Ruspini, L. Sentis and S. Viji, “Human-centered robotics and interactive haptic simulation”, International Journal of Robotics Research, 23(2), pp 167-178 (2004). doi:10.1177/0278364904041325
[54] C. Burghart, O. Schorr, S. Yigit, N. Hata, K. Chinzei, A. Timofeev, R. Kikinis, H. Wörn and U. Rembold, “A multi-agent system architecture for man-machine interaction in computer aided surgery”, Proceedings of the 16th IAR Annual Meeting, Strasbourg, pp 117-123 (2001).
[55] A. Legin, A. Rudnitskaya, B. Seleznev and Yu. Vlasov, “Electronic tongue for quality assessment of ethanol, vodka and eau-de-vie”, Analytica Chimica Acta, 534, pp 129-135 (2005). doi:10.1016/j.aca.2004.11.027
[56] S. Oviatt, “Multimodal interfaces”, in J.A. Jacko and A. Sears (eds), The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies, and Emerging Applications, Lawrence Erlbaum Associates, Mahwah (2003).
[57] R.A. Bolt, “Put-that-there: voice and gesture at the graphics interface”, Proceedings of the 7th Annual Conference on Computer Graphics and Interactive Techniques, Seattle, Washington, United States, pp 262-270 (1980).
[58] M. Johnston and S. Bangalore, “MATCHKiosk: a multimodal interactive city guide”, Proceedings of the ACL 2004 Interactive Poster and Demonstration Sessions, Barcelona, Spain, Article No. 33 (2004).
[59] I. McCowan, D. Gatica-Perez, S. Bengio, G. Lathoud, M. Barnard and D. Zhang, “Automatic analysis of multimodal group actions in meetings”, IEEE Transactions on PAMI, 27(3), pp 305-317 (2005). doi:10.1109/TPAMI.2005.49
[60] S. Meyer and A. Rakotonirainy, “A survey of research on context-aware homes”, Australasian Information Security Workshop Conference on ACSW Frontiers, pp 159-168 (2003).
[61] P. Smith, M. Shah and N.D.V. Lobo, “Determining driver visual attention with one camera”, IEEE Transactions on Intelligent Transportation Systems, 4(4), pp 205-218 (2003). doi:10.1109/TITS.2003.821342
[62] K. Salen and E. Zimmerman, Rules of Play: Game Design Fundamentals, MIT Press, Cambridge (2003).
[63] Y. Arafa and A. Mamdani, “Building multi-modal personal sales agents as interfaces to E-commerce applications”, Proceedings of the 6th International Computer Science Conference on Active Media Technology, pp 113-133 (2001). doi:10.1007/3-540-45336-9_16
[64] Y. Kuno, N. Shimada and Y. Shirai, “Look where you’re going: a robotic wheelchair based on the integration of human and environmental observations”, IEEE Robotics and Automation Magazine, 10(1), pp 26-34 (2003).
[65] A. Ronzhin and A. Karpov, “Assistive multimodal system based on speech recognition and head tracking”, Proceedings of the 13th European Signal Processing Conference, Antalya (2005).
[66] M. Pantic, A. Pentland, A. Nijholt and T. Huang, “Human computing and machine understanding of human behavior: a survey”, Proceedings of the 8th International Conference on Multimodal Interfaces, Banff, Alberta, Canada, pp 239-248 (2006).
[67] A. Kapoor, W. Burleson and R.W. Picard, “Automatic prediction of frustration”, International Journal of Human-Computer Studies, 65, pp 724-736 (2007). doi:10.1016/j.ijhcs.2007.02.003
[68] H. Gunes and M. Piccardi, “Bi-modal emotion recognition from expressive face and body gestures”, Journal of Network and Computer Applications, 30, pp 1334-1345 (2007). doi:10.1016/j.jnca.2006.09.007
[69] C. Busso, Z. Deng, S. Yildirim, M. Bulut, C.M. Lee, A. Kazemzadeh, S. Lee, U. Neumann and S. Narayanan, “Analysis of emotion recognition using facial expressions, speech and multimodal information”, Proceedings of the 6th International Conference on Multimodal Interfaces, State College, PA, USA, pp 205-211 (2004). doi:10.1145/1027933.1027968
[70] M. Johnston, P.R. Cohen, D. McGee, S.L. Oviatt, J.A. Pittman and I. Smith, “Unification-based multimodal integration”, Proceedings of the Eighth Conference on European Chapter of the Association for Computational Linguistics, pp 281-288 (1997). doi:10.3115/979617.979653
[71] D. Perzanowski, A. Schultz, W. Adams, E. Marsh and M. Bugajska, “Building a multimodal human-robot interface”, IEEE Intelligent Systems, 16, pp 16-21 (2001). doi:10.1109/MIS.2001.1183338
[72] H. Holzapfel, K. Nickel and R. Stiefelhagen, “Implementation and evaluation of a constraint-based multimodal fusion system for speech and 3D pointing gestures”, Proceedings of the 6th International Conference on Multimodal Interfaces, pp 175-182 (2004). doi:10.1145/1027933.1027964
[73] Brown University, Biology and Medicine, “Robotic Surgery: Neuro-Surgery”, http://biomed.brown.edu/Courses/BI108/BI108_2005_Groups/04/neurology.html, visited on 15/10/2007.
Page range: 137-159
Published on: Dec 13, 2017

© 2017 Fakhreddine Karray, Milad Alemzadeh, Jamil Abou Saleh, Mo Nours Arab, published by Professor Subhas Chandra Mukhopadhyay
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.