P. K. Atrey, M. A. Hossain, A. El Saddik, and M. S. Kankanhalli, “Multimodal fusion for multimedia analysis: a survey,” Multimedia Systems, vol. 16, pp. 345-379, 2010. doi: 10.1007/s00530-010-0182-0
H. Djenidi, S. Benarif, A. Ramdane-Cherif, C. Tadj, and N. Levy, “Generic multimedia multimodal agents paradigms and their dynamic reconfiguration at the architectural level,” EURASIP J. Appl. Signal Process., vol. 2004, pp. 1688-1707, 2004.
S. Oviatt, P. Cohen, L. Wu, J. Vergo, L. Duncan, B. Suhm, J. Bers, T. Holzman, T. Winograd, J. Landay, J. Larson, and D. Ferro, “Designing the User Interface for Multimodal Speech and Pen-Based Gesture Applications: State-of-the-Art Systems and Future Research Directions,” in Human-Computer Interaction, vol. 15. Lawrence Erlbaum Associates, 2000, pp. 263-322.
R. Raisamo, A. Hippula, S. Patomaki, E. Tuominen, V. Pasto, and M. Hasu, “Testing usability of multimodal applications with visually impaired children,” MultiMedia, IEEE, vol. 13, pp. 70-76, 2006. doi: 10.1109/MMUL.2006.68
M. Debevc, P. Kosec, M. Rotovnik, and A. Holzinger, “Accessible Multimodal Web Pages with Sign Language Translations for Deaf and Hard of Hearing Users,” in Database and Expert Systems Application, 2009. DEXA ‘09. 20th International Workshop on, 2009, pp. 279-283. doi: 10.1109/DEXA.2009.92
J. Lai, S. Mitchell, and C. Pavlovski, “Examining modality usage in a conversational multimodal application for mobile e-mail access,” International Journal of Speech Technology, vol. 10, pp. 17-30, 2007. doi: 10.1007/s10772-009-9017-9
M. C. Caschera, A. D’Andrea, A. D’Ulizia, F. Ferri, P. Grifoni, and T. Guzzo, “ME: Multimodal Environment Based on Web Services Architecture,” presented at the OTM 2009 Workshops, Vilamoura, Portugal, 2009, pp. 504-512. doi: 10.1007/978-3-642-05290-3_63
A. Karpov, A. Ronzhin, I. Kipyatkova, and L. Akarun, “Multimodal Human Computer Interaction with MIDAS Intelligent Infokiosk,” in Pattern Recognition (ICPR), 2010 20th International Conference on, 2010, pp. 3862-3865. doi: 10.1109/ICPR.2010.941
G. P. Laput, M. Dontcheva, G. Wilensky, W. Chang, A. Agarwala, J. Linder, and E. Adar, “PixelTone: a multimodal interface for image editing,” presented at the Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Paris, France, 2013. doi: 10.1145/2470654.2481301
A. Zaguia, M. D. Hina, C. Tadj, and A. Ramdane-Cherif, “Using Multimodal Fusion in Accessing Web Services,” Journal of Emerging Trends in Computing and Information Sciences, vol. 1, pp. 121-138, 2010.
D. Costa and C. Duarte, “Adapting Multimodal Fission to User’s Abilities,” in Universal Access in Human-Computer Interaction. Design for All and eInclusion, vol. 6765, C. Stephanidis, Ed. Springer Berlin Heidelberg, 2011, pp. 347-356.
D. Perroud, L. Angelini, O. Abou Khaled, and E. Mugellini, “Context-Based Generation of Multimodal Feedbacks for Natural Interaction in Smart Environments,” in AMBIENT 2012, The Second International Conference on Ambient Computing, Applications, Services and Technologies, 2012, pp. 19-25.
F. Honold, F. Schussel, and M. Weber, “The Automated Interplay of Multimodal Fission and Fusion in Adaptive HCI,” in Intelligent Environments (IE), 2014 International Conference on, IEEE, 2014. doi: 10.1109/IE.2014.32
D. Schnelle-Walka, S. Radomski, and M. Mühlhäuser, “Multimodal Fusion and Fission within W3C Standards for Nonverbal Communication with Blind Persons,” in Computers Helping People with Special Needs. Springer International Publishing, 2014, pp. 209-213. doi: 10.1007/978-3-319-08596-8_33
A. Benoit, L. Bonnaud, A. Caplier, P. Ngo, L. Lawson, D. G. Trevisan, V. Levacic, C. Mancas, and G. Chanel, “Multimodal focus attention and stress detection and feedback in an augmented driver simulator,” Personal and Ubiquitous Computing, vol. 13, pp. 33-41, 2009. doi: 10.1007/s00779-007-0173-0
A. Zaguia, A. Wahbi, M. Miraoui, C. Tadj, and A. Ramdane-Cherif, “Modeling Rules Fission and Modality Selection Using Ontology,” Journal of Software Engineering and Applications, vol. 7, pp. 354-371, 2013. doi: 10.4236/jsea.2013.67045
A. Zaguia, A. Wahbi, C. Tadj, and A. Ramdane-Cherif, “Multimodal Fission For Interaction Architecture,” Journal of Emerging Trends in Computing and Information Sciences, vol. 4, February 2013.
A. Zaguia, M. D. Hina, C. Tadj, and A. Ramdane-Cherif, “Interaction context-aware modalities and multimodal fusion for accessing web services,” Ubiquitous Computing and Communication Journal, vol. 5, no. 4, pp. 1-15, 2010.
Q. Ji, Z. Zhu, and P. Lan, “Real-time nonintrusive monitoring and prediction of driver fatigue,” Vehicular Technology, IEEE Transactions on, vol. 53, pp. 1052-1068, 2004.
F. Zhang, W. Liu, and Y. Bi, “Review on Wordnet-based ontology construction in China,” International Journal on Smart Sensing and Intelligent Systems, vol. 6, no. 2, pp. 630-647, 2013.
D. Guessoum, M. Miraoui, and C. Tadj, “Survey of Semantic Similarity Measures in Pervasive Computing,” International Journal on Smart Sensing and Intelligent Systems, vol. 8, no. 1, pp. 125-158, 2015. doi: 10.21307/ijssis-2017-752
G. Pavlakos, S. Theodorakis, V. Pitsikalis, S. Katsamanis, and P. Maragos, “Kinect-based multimodal gesture recognition using a two-pass fusion scheme,” in Proc. Int’l Conf. on Image Processing, 2014, pp. 1495-1499. doi: 10.1109/ICIP.2014.7025299