Decoding state specific connectivity during speech production and perception

Abstract

Understanding how dynamic brain networks support language perception and production is central to cognitive neuroscience. A vast network-based literature has employed functional connectivity (FC), primarily using resting-state and task-based fMRI. However, methodological limitations have hindered this approach in language processing, particularly during speech production. Here, we address this gap by studying a large cohort of electrocorticography (ECoG) patients (N=42) to investigate the networks driving speech perception and production. We acquired data while patients were engaged in a controlled battery of speech production tasks spanning five cognitive states (auditory perception, picture perception, reading perception, speech production, and baseline). Using linear classifiers, we robustly decoded cognitive states from single-trial FC (i.e., Pearson correlations) of the neural activity patterns, achieving a mean accuracy of 64.4%. These classifiers revealed distinct network signatures underlying auditory and visual perception, as well as speech production, via stable network connectivity. Importantly, the network signatures included both regions with robust local neural activity and regions with minimal or no detectable activation, indicating that even low-activity regions contribute critically to differentiating cognitive states. Our findings underscore the value of functional connectivity analysis as a complement to analyses of local neural activity, and suggest that the functional networks supporting speech extend beyond the most metabolically active regions.
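The core analysis described above (single-trial FC features fed to a linear classifier) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' pipeline: the channel counts, the two toy "states," and the nearest-centroid rule (standing in for whatever linear classifier the study used) are all assumptions for demonstration.

```python
import numpy as np

def trial_fc_features(trial):
    """Vectorize the upper triangle of a trial's channel-by-channel
    Pearson correlation matrix (single-trial functional connectivity)."""
    r = np.corrcoef(trial)                      # (n_channels, n_channels)
    iu = np.triu_indices(trial.shape[0], k=1)   # off-diagonal upper triangle
    return r[iu]

rng = np.random.default_rng(0)
n_channels, n_samples = 8, 200

def make_trial(state):
    # Hypothetical states: state "A" mixes a shared latent signal into
    # every channel (high inter-channel correlation); state "B" does not.
    x = rng.standard_normal((n_channels, n_samples))
    if state == "A":
        x += 0.8 * rng.standard_normal(n_samples)
    return x

X = np.array([trial_fc_features(make_trial(s))
              for s in ["A"] * 40 + ["B"] * 40])
y = np.array([0] * 40 + [1] * 40)

# Minimal linear classifier: assign each trial to the nearest class
# centroid in FC-feature space.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

In practice one would evaluate with held-out trials (cross-validation) rather than training accuracy, and the feature vector would span the study's actual electrode montage per patient.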
