Tracking eye gaze during cued speech perception
Abstract
For many deaf people, lip-reading plays a major role in verbal communication. However, lip-reading is inherently ambiguous and does not allow for a complete understanding of speech. The consequences of these limitations are significant, potentially impeding language, cognitive, and social development. Cued speech (CS) was developed to eliminate this ambiguity by supplementing lip-reading with hand gestures, giving access to the entire phonological content of speech through the visual modality alone. Despite its documented efficacy in enhancing linguistic and communicative abilities, the mechanisms of CS perception remain largely unknown. The present study is the first to examine eye movements during CS perception, with a sample of deaf CS users, hearing CS users, and hearing naive controls. We presented silent videos of words, pseudowords, and sentences in their CS form while recording the participants’ eye movements. All groups fixated almost exclusively on the face, and predominantly on the lips of the speaker, despite the effective processing of CS gestures by CS users. Deaf and hearing participants differed strikingly in how fixation was distributed between the left and right halves of the face. While both hearing groups mostly fixated the left side of the speaker’s face, deaf participants showed a more symmetrical pattern. Finally, in CS users, stimuli that were phonologically, lexically, or semantically more difficult tended to increase fixation on the inferior and left sectors of the face. Apart from reading, CS stands as the sole system for visually conveying the full phonological information of a spoken language. This study elucidates the fundamental behavioral tuning that enables the efficient recovery of phonology in this distinctive modality.