A magnitude-independent neural code for linguistic information during sentence production

Abstract

Humans are the only species with the ability to systematically combine words to convey an unbounded number of complex meanings. This capacity relies on combinatorial mechanisms thought to be unique to our species. Despite their centrality to human cognition, the neural mechanisms underlying these systems remain obscured by inherent limitations of non-invasive brain measures and a near-total focus on comprehension paradigms. Here, we address these limitations with high-resolution neurosurgical recordings (electrocorticography) and a controlled sentence production experiment. We uncover distinct cortical networks encoding word-level and higher-order information. These networks exhibit a hybrid spatial organization: broadly distributed across traditional language areas, but with focal concentrations of sensitivity to semantic and structural contrasts in canonical language regions. In contrast to previous findings from comprehension studies, we find that these networks are largely non-overlapping, each processing information associated with one of three linguistic contrasts. Most strikingly, our data reveal an unexpected property of higher-order linguistic information: it is encoded independently of neural activity levels. These results show that activity magnitude and information content are dissociable, with important implications for studying the neurobiology of language.
