Gaze guides language comprehension in adults in a simulated social environment
Abstract
Multimodal cues are crucial for real-life language comprehension. While gaze-following is a well-established mechanism in early language acquisition, its role in adult language comprehension, especially under challenging conditions, remains relatively unexplored. Here, we used a novel immersive and embodied virtual reality (VR) paradigm to examine whether referential gaze facilitates comprehension of unfamiliar spoken words in a noisy environment. Participants interacted with virtual agents whose gaze behavior varied in referential informativeness. Explicit feedback ensured that gaze-guided comprehension outcomes could be disentangled from informational content. Behavioral and eye-tracking data showed that participants identified spoken input more accurately when the teacher’s gaze reliably signaled the intended referent, and when they actively followed it. This facilitative effect of gaze emerged early and persisted across tasks. These findings highlight the enduring role of social gaze mechanisms in adult language comprehension, particularly in ambiguous or noisy contexts. They have important implications for a multimodal and situated language processing perspective, showing that gaze-following plays a meaningful role in real-world comprehension.