
Mental Simulations of Phonological Representations Are Causally Linked to Silent Reading of Direct Versus Indirect Speech

By: Bo Yao  
Open Access | Jan 2021

Figures & Tables

Table 1

Example stimulus item with three conditions in Experiment 1.

Giovanni De Luca owns an independent Italian coffee shop that proves very popular in Didsbury. Entrepreneur Alexander J. Jones wants to invest in Giovanni’s coffee and asks him his coffee-making secret.
DS: Giovanni laughs and explains, “You can only make a proper cup of coffee from a proper copper coffee pot.”
IS: Giovanni laughs and explains that one can only make a proper cup of coffee from a proper copper coffee pot.
NS: Giovanni laughs and shows him that one can only make a proper cup of coffee from a proper copper coffee pot.
The entrepreneur realises that he needs more cash to invest in these copper coffee pots.

[i] DS = direct speech; IS = indirect speech; NS = non-speech.

Table 2

Mean first-pass reading times and SDs (in ms) across experimental conditions in Experiment 1.

Reporting Style    N      Mean fpRT    SD
Direct Speech      407    2079         1290
Indirect Speech    404    1859         1241
Non-Speech         404    1867         1230

[i] N = no. of observations; fpRT = first-pass reading time; SD = standard deviation.

Table 3

Generalised linear mixed-effect model estimates of first-pass reading times for Experiment 1.

Fixed Effects    b       S.E.    t       p
Intercept        2031    14.1    143.9   <.001
IS – DS          –166    20.2    –8.2    <.001
NS – DS          –156    14.6    –10.7   <.001

[i] DS = Direct Speech; IS = Indirect Speech; NS = Non-Speech.

P-values were calculated using Satterthwaite approximations (lmerTest package).
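
For orientation, here is a minimal sketch of how such a model might be fit in R. The table attributes the Satterthwaite-approximated p-values to the lmerTest package, but the exact call, the random-effects structure, and the column names (fpRT, style, subject, item) below are illustrative assumptions, not the author's analysis script:

    # Minimal sketch, assuming a long-format data frame 'dat' with one row per
    # first-pass reading time observation; all names here are hypothetical.
    library(lmerTest)  # wraps lme4::lmer and adds Satterthwaite df and p-values

    m1 <- lmer(fpRT ~ style + (1 | subject) + (1 | item), data = dat)
    summary(m1)  # t-tests use Satterthwaite's method, as noted above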

Figure 1

Model-estimated effects of Reporting Style on first-pass reading times in Experiment 1. The error bars represent 95% confidence intervals.

Table 4

Mean first-pass oral reading times and SDs (in ms) across experimental conditions in Experiment 2.

Reporting Style    N      Mean fpRT    SD
Direct Speech      191    3074         1159
Indirect Speech    192    3214         1331
Non-Speech         192    3095         972

[i] N = no. of observations; fpRT = first-pass reading time; SD = standard deviation.

Table 5

Generalised linear mixed-effect model estimates of first-pass oral reading times for Experiment 2.

Fixed Effects    b       S.E.    t      p
Intercept        3424    37.8    90.6   <.001
IS – DS          116     26.7    4.3    <.001
NS – DS          64      47.7    1.3    .181

[i] DS = Direct Speech; IS = Indirect Speech; NS = Non-Speech.

P-values were calculated using Satterthwaite approximations (lmerTest package).
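
The IS – DS and NS – DS rows in Tables 3 and 5 read as contrasts against a Direct Speech baseline. One hypothetical way to obtain coefficients in exactly that form is treatment coding with DS as the reference level; the coding below is an assumption for illustration, not taken from the source:

    # Hypothetical coding sketch: with DS as the reference level, the two
    # fixed-effect coefficients directly estimate IS - DS and NS - DS.
    dat$style <- factor(dat$style, levels = c("DS", "IS", "NS"))

    m2 <- lmer(fpRT ~ style + (1 | subject) + (1 | item), data = dat)
    summary(m2)  # coefficient rows 'styleIS' and 'styleNS' map onto the tables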

Figure 2

Model-estimated effects of Reporting Style on first-pass oral reading times in Experiment 2. The error bars represent 95% confidence intervals.

Table 6

Mean first-pass reading times and SDs (in ms) across experimental conditions in Experiment 3.

Interference    Reporting Style    N      Mean fpRT    SD
Phonological    Direct Speech      309    1474         833
Phonological    Indirect Speech    310    1418         965
Manual          Direct Speech      306    1896         1162
Manual          Indirect Speech    302    1715         1152

[i] N = no. of observations; fpRT = first-pass reading time; SD = standard deviation.

Table 7

Generalised linear mixed-effect model estimates of first-pass reading times for Experiment 3.

Fixed Effects            b       S.E.    t       p
Intercept                1624    15.1    107.3   <.001
PI – MI                  –325    21.4    –15.2   <.001
DS – IS                  166     16.4    10.1    <.001
(PI – MI) × (DS – IS)    –207    12.3    –16.8   <.001

[i] PI = Phonological Interference; MI = Manual Interference; DS = Direct Speech; IS = Indirect Speech.

P-values were calculated using Satterthwaite approximations (lmerTest package).
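
Experiment 3 crosses Interference (PI vs. MI) with Reporting Style (DS vs. IS), and Table 7 reports the two main effects and their interaction as difference contrasts. A minimal sketch of one way to specify such a 2 × 2 model in R follows; the ±0.5 sum-style coding that makes each coefficient read as a mean difference is an assumption, as are all variable names:

    # Minimal sketch, not the author's analysis script. With +/-0.5 coding,
    # each main-effect coefficient equals the corresponding mean difference.
    dat3$interference <- factor(dat3$interference, levels = c("MI", "PI"))
    dat3$style        <- factor(dat3$style, levels = c("IS", "DS"))
    contrasts(dat3$interference) <- c(-0.5, 0.5)  # coefficient reads as PI - MI
    contrasts(dat3$style)        <- c(-0.5, 0.5)  # coefficient reads as DS - IS

    m3 <- lmer(fpRT ~ interference * style + (1 | subject) + (1 | item),
               data = dat3)
    summary(m3)  # main effects, interaction, and Satterthwaite p-values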

Figure 3

Model-estimated effects of Interference and Reporting Style on first-pass reading times in Experiment 3. The error bars represent 95% confidence intervals.

DOI: https://doi.org/10.5334/joc.141 | Journal eISSN: 2514-4820
Language: English
Submitted on: Apr 19, 2020 | Accepted on: Nov 5, 2020 | Published on: Jan 8, 2021
Published by: Ubiquity Press
In partnership with: Paradigm Publishing Services
Publication frequency: 1 issue per year

© 2021 Bo Yao, published by Ubiquity Press
This work is licensed under the Creative Commons Attribution 4.0 License.