Crosslinguistic Coordination of Overt Attention and Speech Production as Evidence for a Language of Vision
Abstract
A central question in cognition is how representations are integrated across different modalities such as language and vision. One prominent hypothesis posits the existence of an abstract, pre-linguistic language of vision that organises meaning compositionally to enable cross-modal integration. This hypothesis predicts that the language of vision operates universally, independent of linguistic surface features such as word order. We conducted eye-tracking experiments in which participants described visual scenes in English, Portuguese, and Japanese. By analysing eye-movement sequences alongside spoken descriptions, we demonstrate that semantic similarity between sentences strongly predicts the similarity of the associated scan patterns in all three languages, even across scenes and for sentences in different languages. In contrast, the effect of syntactic similarity was language-dependent and restricted to the scene context. Our findings support a universal language of vision as an organising principle of meaning that transcends syntactic structure.