Hidden Literacy in Minimally Verbal Autistic Individuals Revealed by Eye Gaze
Abstract
Minimally verbal autistic individuals (mvASD) are often presumed to have severe cognitive and language impairments based on their poor performance on standardized assessments that require voluntary motor responses, such as pointing. However, emerging evidence suggests that these individuals may possess latent cognitive abilities. Here, we introduce the Cued Looking Paradigm (CLP), a novel eye-tracking method that bypasses motor requirements by capturing involuntary gaze responses to language-based stimuli. In our study, 35 minimally verbal autistic adolescents and adults were presented with spoken or written words, each followed by a pair of images (target and foil), while their eye movements were recorded. The majority (83%) of mvASD participants demonstrated hidden receptive language and reading abilities, with eye-gaze performance, including time course and spatial displacement, comparable to that of neurotypical controls. In contrast, the same mvASD individuals averaged only 57% accuracy when asked to point to the target, revealing a significant gap between responses reported via pointing and actual lexical-semantic knowledge. Furthermore, pupil dilation analysis during the task indicated reduced arousal recruitment in mvASD participants, potentially implicating dysregulation of the locus coeruleus-norepinephrine (LC-NE) system in the performance gap between pointing and eye gaze. These findings challenge assumptions of global intellectual limitation while confirming specific lexical-semantic competence among mvASD individuals. The results highlight the need for, and provide, alternative assessments that bypass manual motor responses. The CLP shows promise for revealing cognitive and language abilities, with important implications for both research and education.
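To make the gaze-based scoring concrete, the sketch below shows one minimal way a CLP-style trial could be scored: classify a trial as correct when the majority of valid gaze samples fall on the side of the screen showing the target image. This is an illustration, not the authors' analysis code; the midline AOI split, the 50% majority criterion, and the track-loss threshold are all assumptions introduced here for the example.

```python
import numpy as np

def score_trial(gaze_x, target_on_left, screen_w=1920, min_valid_frac=0.5):
    """Score one CLP-style trial from horizontal gaze samples.

    gaze_x         : 1-D float array of horizontal gaze positions (pixels),
                     with NaN for samples lost to blinks or track loss.
    target_on_left : True if the target image occupied the left half.
    Returns True if the majority of valid samples fell on the target side,
    or None if too few valid samples remain to score the trial.
    """
    valid = gaze_x[~np.isnan(gaze_x)]
    if valid.size < min_valid_frac * gaze_x.size:  # excessive track loss
        return None
    on_left = valid < screen_w / 2                 # left/right AOI split at midline
    prop_target = on_left.mean() if target_on_left else (~on_left).mean()
    return prop_target > 0.5                       # majority-looking criterion

def accuracy(trials):
    """Aggregate accuracy over (gaze_x, target_on_left) trials, skipping unscorable ones."""
    scores = [score_trial(x, side) for x, side in trials]
    scores = [s for s in scores if s is not None]
    return np.mean(scores) if scores else float("nan")
```

The time-course analysis mentioned in the abstract would extend this idea by computing the proportion of target-directed looking within successive time bins after word onset, rather than over the whole trial.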
Significance Statement
Standard language tests assume that a person can point or speak. Although minimally verbal individuals can point, their accuracy is often variable, and pointing may not reliably reflect their comprehension. Using a simple eye-tracking task that replaces pointing with automatic gaze shifts, we demonstrate that most mvASD participants accurately match spoken or written words to pictures, even though they fail the same task when pointing is required. This finding overturns the long-standing belief that absence of speech equates to absence of understanding, and it reveals a systematic bias in common assessments. Motor-free tools like the Cued Looking Paradigm, together with broader systematic modifications in assessment and treatment, could transform diagnosis, guide individualized education, and open new research avenues on covert language processing in neurodevelopmental conditions.