A stereoelectroencephalography study of internal speech
Abstract
With advances in intracranial recording techniques, converting speech-related brain activity into commands for brain-computer interfaces (BCIs) is becoming increasingly feasible. In this study, we explored the utility of stereoelectroencephalography (sEEG) for decoding covert and overt speech processes in humans. sEEG data were collected from 11 epilepsy patients undergoing presurgical monitoring while they performed four tasks: overt speech, articulated speech, imagined (inner) speech, and handwriting. Time- and frequency-domain analyses revealed that each speech condition elicited distinct spectral modulations, particularly in the alpha (8–12 Hz) and gamma (50–80 Hz) bands across temporal and frontal regions. Inner speech exhibited reduced and delayed activation in key motor and language-related areas, distinguishing it from both overt and articulated speech. In contrast, handwriting evoked a different pattern of rhythmic dynamics, marked by gamma desynchronization and more sustained alpha increases. A machine-learning classifier distinguished the three speech conditions from their spectral profiles with an average accuracy of 72%. These findings support the view that imagined speech is a neurally distinct phenomenon and demonstrate that sEEG can reliably detect transitions between internal and overt forms of speech. This work contributes to the development of multimodal BCIs capable of decoding covert language representations, with implications for restoring communication in individuals with severe motor or speech impairments.
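To make the analysis pipeline concrete, the sketch below shows one plausible way to turn epoched sEEG into the kind of spectral features the abstract describes (log band power in the alpha and gamma ranges) and to classify the three speech conditions. This is not the authors' code: the sampling rate, contact count, epoch length, feature construction, and the choice of logistic regression are all illustrative assumptions, and the data here are synthetic stand-ins.

```python
# Hypothetical sketch: band-power features from epoched sEEG feeding a
# three-way speech-condition classifier. All parameters are assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

FS = 1000                                      # assumed sampling rate (Hz)
BANDS = {"alpha": (8, 12), "gamma": (50, 80)}  # bands named in the abstract

def band_power_features(epochs):
    """epochs: (n_trials, n_channels, n_samples) -> log band power, flattened."""
    n_trials, n_channels, _ = epochs.shape
    feats = np.empty((n_trials, n_channels, len(BANDS)))
    for t in range(n_trials):
        for c in range(n_channels):
            freqs, psd = welch(epochs[t, c], fs=FS, nperseg=512)
            for b, (lo, hi) in enumerate(BANDS.values()):
                mask = (freqs >= lo) & (freqs <= hi)
                feats[t, c, b] = psd[mask].mean()  # mean power in band
    return np.log(feats).reshape(n_trials, -1)

# Synthetic stand-in data: 90 trials x 64 contacts x 2 s epochs, 3 classes.
rng = np.random.default_rng(0)
X = band_power_features(rng.standard_normal((90, 64, 2 * FS)))
y = np.repeat([0, 1, 2], 30)                   # overt / articulated / imagined

clf = LogisticRegression(max_iter=1000)
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
# On random data this hovers near the 3-class chance level of ~0.33; the 72%
# reported in the abstract would sit well above that baseline.
```

In a pipeline like this, per-trial, per-contact band power is a common minimal feature set for sEEG decoding; richer time-frequency features or regularized models could be substituted without changing the overall structure.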