The neurocomputational mechanisms of hierarchical linguistic predictions during narrative comprehension
Abstract
Language comprehension requires a listener to predict upcoming linguistic units across multiple timescales based on the preceding context, but how this prediction process is hierarchically represented and implemented in the human brain remains unclear. Combining a natural language processing (NLP) approach with functional magnetic resonance imaging (fMRI) in a narrative comprehension task, we first applied a group-based general linear model (gGLM) to identify the neural underpinnings of language prediction at the word and sentence levels. Our results revealed a cortical architecture supporting prediction, extending from the superior temporal cortices to regions of the default mode network. We then investigated how these adjacent levels of the hierarchy interact by testing two rival hypotheses: the continuous updating hypothesis posits that the higher level of the representational hierarchy is continuously updated as inputs unfold over time, whereas the sparse updating hypothesis states that the higher level is updated only at the boundaries of its preferred linguistic unit. Using computational modeling and autocorrelation analysis, we found that the sparse model outperformed the continuous model and that updating likely occurs at sentence boundaries. Together, our results extend linguistic prediction from short timescales (words) to longer timescales (sentences), providing novel insights into the neurocomputational mechanisms of information updating within the linguistic prediction hierarchy.
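The contrast between the two hypotheses can be illustrated with a toy simulation (this is a minimal sketch for intuition only, not the authors' actual model; the sentence length, feature encoding, and lag-1 autocorrelation measure are all illustrative assumptions). A continuously updated higher-level signal changes with every incoming word, whereas a sparsely updated one stays constant within a sentence and changes only at sentence boundaries, yielding a distinctive autocorrelation profile:

```python
import random

random.seed(0)

# Toy word-level input: 20 "sentences" of 8 "words" each,
# with each word encoded as a random scalar feature.
n_sent, words_per_sent = 20, 8
words = [random.gauss(0, 1) for _ in range(n_sent * words_per_sent)]

# Continuous updating: the higher (sentence-level) state is refreshed
# with every incoming word (running mean over the current sentence).
continuous = []
for i in range(len(words)):
    start = (i // words_per_sent) * words_per_sent
    continuous.append(sum(words[start:i + 1]) / (i - start + 1))

# Sparse updating: the state changes only at sentence boundaries,
# holding the previous sentence's summary in between.
sparse, state = [], 0.0
for i in range(len(words)):
    sparse.append(state)
    if (i + 1) % words_per_sent == 0:  # sentence boundary reached
        start = i + 1 - words_per_sent
        state = sum(words[start:i + 1]) / words_per_sent

def lag1_autocorr(x):
    """Lag-1 autocorrelation of a sequence."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

print(f"continuous lag-1 r = {lag1_autocorr(continuous):.2f}")
print(f"sparse     lag-1 r = {lag1_autocorr(sparse):.2f}")
```

Because the sparse signal is piecewise constant within sentences, its value changes only at sentence boundaries; the continuous signal changes at nearly every word. In principle, this kind of temporal signature is what an autocorrelation analysis of the fMRI signal can adjudicate.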