Neural specialization for ‘visual’ concepts emerges in the absence of vision

Abstract

Vision provides a key source of information about many concepts, including living things (e.g., 'tiger') and visual events (e.g., 'sparkle'). According to a prominent theoretical framework, neural specialization for different conceptual categories is shaped by sensory features; for example, living things are neurally dissociable from navigable places because concepts of living things depend more on visual features. We tested this framework by comparing the neural basis of 'visual' concepts across sighted (n = 22) and congenitally blind (n = 21) adults. Participants judged the similarity of words varying in their reliance on vision while undergoing fMRI. We compared neural responses to living-thing nouns (birds, mammals) and place nouns (natural, manmade). In addition, we compared visual event verbs (e.g., 'sparkle') to non-visual event verbs (sound emission, hand motion, mouth motion). People born blind exhibited distinctive univariate and multivariate responses to living things in a temporo-parietal semantic network activated by nouns, including the precuneus (PC). To our knowledge, this is the first demonstration that neural selectivity for living things does not require vision. We additionally observed preserved neural signatures of 'visual' light events in the left middle temporal gyrus (LMTG+). Across a wide range of semantic types, neural representations of sensory concepts develop independently of sensory experience.

Significance Statement

Vision offers a key source of information about major conceptual categories, including animals and light emission events. Comparing neural signatures of concepts in congenitally blind and sighted people tests the contribution of visual experience to conceptual representation. Sighted and congenitally blind participants heard 'visual' nouns (e.g., 'tiger') and verbs (e.g., 'sparkle'), as well as less visual nouns (e.g., 'barn') and verbs (e.g., 'squeak'), while undergoing fMRI. Contrary to previous claims, both univariate and multivariate analyses reveal similar representations of animals and light emission verbs across groups. Across a broad range of semantic types, 'visual' concepts develop independently of visual experience. These results challenge theories that emphasize the role of sensory information in conceptual representation.