Linguistic structure and language familiarity sharpen phoneme encoding in the brain

Abstract

How does the brain turn a physical signal like speech into meaning? It draws on two key sources: linguistic structure (e.g., phonemes, syntax) and statistical regularities from experience. Yet how these jointly shape neural representations of language remains unclear. We used MEG to track phonemic and acoustic encoding during spoken language comprehension in native Dutch, Mandarin Chinese, and Turkish speakers. Phoneme-level encoding was stronger during sentence comprehension than in word lists, and more robust within words than in random syllable sequences. Surprisingly, similar encoding emerged even in an uncomprehended language, but only with prior exposure. In contrast, acoustic edges were briefly suppressed early in comprehension. This suggests that the brain's alignment to speech (in phase and power) is robustly tuned by structure and by learned statistical patterns. Our findings show how structured knowledge and experience-based learning interact to shape neural responses to language, offering insight into how the brain processes complex, meaningful signals.
