A Unified Neural Timecourse for Words, Phrases, and Sentences: MEG Evidence from Parallel Presentation


Abstract

Recent behavioral and neural research on reading shows that humans can extract syntactic structure from short sentences within a fraction of a second—faster than many estimates for recognizing the meaning of a single word. This challenges a core assumption of many language processing models: that combinatory operations depend on prior lexical access. Further, studies using parallel presentation of full sentences have revealed electrophysiological responses remarkably similar to those well established for single words. This raises the question of whether words, phrases, and sentences all move through the same processing stages, regardless of syntactic complexity. Using magnetoencephalography (MEG), we examined how single words, phrases, and sentences are processed when all visual information is available at once. Across all three levels, we observed highly similar waveform dynamics, with early responses reflecting bottom-up detection of form, followed by activity in the left anterior and posterior temporal cortices and the ventromedial prefrontal cortex (vmPFC) consistent with combinatory processing. Of these regions, the left anterior temporal lobe (LATL) showed effects of bigram frequency suggestive of serial left-to-right dynamics. Together, these results support a Global-to-Serial Assembly (GLOSA) model in which the brain first detects the global form of the stimulus in a snapshot-like manner and then probes its combinatory properties through partially serial processes.