Bear, H. L., & Harvey, R. (2017). Phoneme-to-viseme mappings: the good, the bad, and the ugly. Speech Communication, 95, 40–67. <a href="https://doi.org/10.1016/j.specom.2017.07.001" target="_blank" rel="noopener noreferrer" class="text-signal-blue hover:underline">https://doi.org/10.1016/j.specom.2017.07.001</a>
Bozkurt, E., Erdem, C. E., Erzin, E., Erdem, T., & Ozkan, M. (2007). Comparison of phoneme and viseme based acoustic units for speech driven realistic lip animation. 2007 IEEE 15th Signal Processing and Communications Applications, SIU, 1–4. <a href="https://doi.org/10.1109/SIU.2007.4298572" target="_blank" rel="noopener noreferrer" class="text-signal-blue hover:underline">https://doi.org/10.1109/SIU.2007.4298572</a>
Chaume, F. (2004). Synchronisation in dubbing: A translational approach. In P. Orero (Ed.), Topics in Audiovisual Translation (pp. 38–50). John Benjamins Publishing Company.
Cutler, A. (2012). Native Listening: Language Experience and the Recognition of Spoken Words. MIT Press. <a href="https://doi.org/10.7551/mitpress/9012.001.0001" target="_blank" rel="noopener noreferrer" class="text-signal-blue hover:underline">https://doi.org/10.7551/mitpress/9012.001.0001</a>
Dodd, B. (1977). The role of vision in the perception of speech. Perception, 6, 31–40. <a href="https://doi.org/10.1068/p060031" target="_blank" rel="noopener noreferrer" class="text-signal-blue hover:underline">https://doi.org/10.1068/p060031</a>
Hughes, R. W., Vachon, F., & Jones, D. M. (2007). Disruption of short-term memory by changing and deviant sounds: Support for a duplex-mechanism account of auditory distraction. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33(6), 1050–1061. <a href="https://doi.org/10.1037/0278-7393.33.6.1050" target="_blank" rel="noopener noreferrer" class="text-signal-blue hover:underline">https://doi.org/10.1037/0278-7393.33.6.1050</a>
Koverienė, I. (2015). Dubliavimas kaip audiovizualinio vertimo moda: anglų ir lietuvių kalbų garsynai vizualinės fonetikos kontekste [Dubbing as a mode of audiovisual translation: The English and Lithuanian sound systems in the context of visual phonetics]. Vilniaus universitetas.
Koverienė, I., & Čeidaitė, K. (2020). Lip synchrony of rounded and protruded vowels and diphthongs in the Lithuanian-dubbed animated film ‘Cloudy with a Chance of Meatballs 2’. Respectus Philologicus, 38(43), 214–229. <a href="https://doi.org/10.15388/RESPECTUS.2020.38.43.69" target="_blank" rel="noopener noreferrer" class="text-signal-blue hover:underline">https://doi.org/10.15388/RESPECTUS.2020.38.43.69</a>
Massaro, D. W., & Simpson, J. A. (1987). Speech Perception by Ear and Eye: A Paradigm for Psychological Inquiry (1st ed.). Psychology Press. <a href="https://doi.org/10.4324/9781315808253" target="_blank" rel="noopener noreferrer" class="text-signal-blue hover:underline">https://doi.org/10.4324/9781315808253</a>
Massaro, D. W., Cohen, M. M., & Smeele, P. M. (1996). Perception of asynchronous and conflicting visual and auditory speech. The Journal of the Acoustical Society of America, 100, 1777–1786. <a href="https://doi.org/10.1121/1.417342" target="_blank" rel="noopener noreferrer" class="text-signal-blue hover:underline">https://doi.org/10.1121/1.417342</a>
Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2), 81–97. <a href="https://doi.org/10.1037/h0043158" target="_blank" rel="noopener noreferrer" class="text-signal-blue hover:underline">https://doi.org/10.1037/h0043158</a>
O’Neill, J. (1954). Contributions of the visual components of oral symbols to speech comprehension. Journal of Speech and Hearing Disorders, 19, 429–439. <a href="https://doi.org/10.1044/jshd.1904.429" target="_blank" rel="noopener noreferrer" class="text-signal-blue hover:underline">https://doi.org/10.1044/jshd.1904.429</a>
Pelson, R. O., & Prather, W. F. (1974). Effects of visual message-related cues, age and hearing impairment on speechreading performance. Journal of Speech and Hearing Research, 17, 518–525. <a href="https://doi.org/10.1044/jshr.1703.518" target="_blank" rel="noopener noreferrer" class="text-signal-blue hover:underline">https://doi.org/10.1044/jshr.1703.518</a>
Romanski, L. M., & Hwang, J. (2012). Timing of audiovisual inputs to the prefrontal cortex and multisensory integration. Neuroscience, 214, 36–48. <a href="https://doi.org/10.1016/j.neuroscience.2012.03.025" target="_blank" rel="noopener noreferrer" class="text-signal-blue hover:underline">https://doi.org/10.1016/j.neuroscience.2012.03.025</a>
Rosenblum, L. D., Yakel, D. A., & Green, K. P. (2000). Face and mouth inversion effects on visual and audiovisual speech perception. Journal of Experimental Psychology: Human Perception and Performance, 26(2), 806–819. <a href="https://doi.org/10.1037/0096-1523.26.2.806" target="_blank" rel="noopener noreferrer" class="text-signal-blue hover:underline">https://doi.org/10.1037/0096-1523.26.2.806</a>
Summerfield, Q. (1987). Some preliminaries to a comprehensive account of audiovisual speech perception. In B. Dodd & R. Campbell (Eds.), Hearing by Eye: The Psychology of Lip-Reading (pp. 3–51). Lawrence Erlbaum Associates, Inc.
Williams, J. J., Rutledge, J. C., Katsaggelos, A. K., & Garstecki, D. C. (1998). Frame rate and viseme analysis for multimedia applications to assist speechreading. Journal of VLSI Signal Processing, 20, 7–23. <a href="https://doi.org/10.1023/A:1008062122135" target="_blank" rel="noopener noreferrer" class="text-signal-blue hover:underline">https://doi.org/10.1023/A:1008062122135</a>