Neural synchrony is “good enough” for speech comprehension
Abstract
Recent evidence indicates that neural populations exhibit synchronous firing at phrase boundaries to facilitate the encoding of syntactic units during speech comprehension. However, good-enough processing accounts of speech comprehension suggest that detailed syntactic analysis may not always be necessary for successful interpretation, especially when listeners can infer meaning from lexical-semantic context. In this brief report, we evaluate this notion and assess whether neural synchrony to syntactic boundaries is modulated by local lexical-semantic content. To this end, we reanalyzed an open-source EEG dataset consisting of brain recordings obtained while participants passively listened to an audiobook. To quantify neural synchrony to phrase boundaries, we computed mutual information (MI) between delta-band EEG activity (< 3 Hz) and hierarchically derived syntactic structures for each sentence in the audiobook. We quantified local lexical-semantic context using semantic dissimilarity values derived from high-dimensional word co-occurrence vectors, and regressed each sentence's MI value against its semantic dissimilarity using linear mixed-effects models. Results indicated that neural synchrony to phrase boundaries showed a positive linear relationship with semantic dissimilarity. We interpret this finding as evidence that listeners' reliance on syntactic information during speech comprehension is modulated by local lexical-semantic context, consistent with good-enough processing accounts.
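
The analysis pipeline summarized above can be sketched in a few lines of Python. The sketch below is illustrative, not a reproduction of the authors' code: it assumes the syntactic structure can be expressed as a per-sample boundary annotation, substitutes scikit-learn's k-nearest-neighbor MI estimator for whatever estimator the study used, and operationalizes semantic dissimilarity as one minus the cosine similarity between each word's embedding and the mean embedding of the preceding words (one common formulation, assumed here). All function names, parameters, and the data frame layout are hypothetical.

    import numpy as np
    from scipy.signal import butter, filtfilt
    from sklearn.feature_selection import mutual_info_regression
    import statsmodels.formula.api as smf

    def delta_band(eeg, fs, cutoff=3.0, order=4):
        """Zero-phase low-pass filter (< 3 Hz) to isolate delta-band activity."""
        b, a = butter(order, cutoff / (fs / 2.0), btype="low")
        return filtfilt(b, a, eeg)

    def sentence_mi(eeg_sentence, boundary_signal, fs):
        """MI between delta-band EEG (one channel, 1-D array) and a
        per-sample phrase-boundary annotation for one sentence.
        Uses scikit-learn's k-NN estimator; the paper's estimator may differ."""
        delta = delta_band(eeg_sentence, fs)
        return mutual_info_regression(delta.reshape(-1, 1), boundary_signal)[0]

    def semantic_dissimilarity(word_vecs):
        """Mean of (1 - cosine similarity) between each word's embedding
        (rows of a 2-D array) and the average embedding of the words
        preceding it -- an assumed operationalization, not the paper's."""
        dis = []
        for i in range(1, len(word_vecs)):
            ctx = word_vecs[:i].mean(axis=0)
            cos = word_vecs[i] @ ctx / (
                np.linalg.norm(word_vecs[i]) * np.linalg.norm(ctx) + 1e-12
            )
            dis.append(1.0 - cos)
        return float(np.mean(dis))

    # df is a hypothetical table with one row per (subject, sentence),
    # holding columns "mi", "dissimilarity", and "subject":
    # model = smf.mixedlm("mi ~ dissimilarity", df, groups=df["subject"]).fit()
    # print(model.summary())

Under these assumptions, one MI value per sentence and per subject feeds the mixed-effects model, with subjects as the grouping factor, so a positive fixed-effect slope on dissimilarity would correspond to the relationship reported in the abstract.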