Cross-modal semantic context effects in less experienced and advanced readers


Abstract

Purpose: Children are in the process of developing the lexical (i.e., whole-word) orthographic expertise necessary to recognize written words. This involves complex visual analysis of written words, which may be vulnerable to crowding effects that impair low-level visual processes such as letter identification and perception. Given that spoken language skills precede learning to read, more developed (oral) language comprehension skills could compensate for immature (visual) orthographic processing skills during reading development.

Methods: This study investigated whether children at two different stages of reading development, and thus differing in lexical orthographic expertise, could rely on their oral language comprehension skills to support the orthographic processes involved in recognizing written words. We examined whether a congruent auditory sentence context mitigated the negative effect of visual crowding (manipulated by presenting words with decreased vs. standard letter spacing) in an orthographic lexical decision paradigm, in Grade 3 (N=47, M=8.37 years, SD=0.28, 27 girls) and Grade 5 (N=45, M=10.66 years, SD=0.33, 27 girls) typical readers.

Results: The results showed stronger auditory sentence context effects on visual word recognition in advanced developing readers in Grade 5 compared to less experienced developing readers in Grade 3. In addition, the magnitude of crowding effects on visual word recognition was modulated by auditory sentence context in Grade 5, with a smaller effect of crowding after congruent compared to incongruent sentences.

Conclusion: The findings suggest that once the visual system is tuned for reading through experience, cross-modal interactions across levels of language and perceptual modalities support visual word recognition.