
Language Experience Predicts Eye Movements During Online Auditory Comprehension

Open Access | Jun 2023

Abstract

Experience-based theories of language processing suggest that listeners use the properties of their previous linguistic input to constrain comprehension in real time (e.g. MacDonald & Christiansen, 2002; Smith & Levy, 2013; Stanovich & West, 1989; Mishra, Pandey, Singh, & Huettig, 2012). This project tests the prediction that individual differences in language experience are reflected in differences in sentence comprehension. Participants completed a visual world eye-tracking task following Altmann and Kamide (1999), which manipulates whether the verb licenses the anticipation of a specific referent in the scene (e.g. The boy will eat/move the cake). Within this paradigm, we ask (1) are there reliable individual differences in language-mediated eye movements during this task? If so, (2) do individual differences in language experience correlate with these differences, and (3) can this relationship be explained by other, more general cognitive abilities? Study 1 finds evidence that language experience predicts an overall facilitation in fixating the target, and Study 2 replicates this effect and finds that it remains when controlling for working memory, inhibitory control, phonological ability, and perceptual speed.


Publisher’s Note: A correction article relating to this paper has been published and can be found at https://journalofcognition.org/articles/10.5334/joc.354.

DOI: https://doi.org/10.5334/joc.285 | Journal eISSN: 2514-4820
Language: English
Submitted on: Feb 1, 2023 | Accepted on: Jun 3, 2023 | Published on: Jun 28, 2023
Published by: Ubiquity Press
In partnership with: Paradigm Publishing Services
Publication frequency: 1 issue per year

© 2023 Ariel N. James, Colleen J. Minnihan, Duane G. Watson, published by Ubiquity Press
This work is licensed under the Creative Commons Attribution 4.0 License.