EEG reveals online monitoring mechanisms of speech production

Abstract

Speaking involves the orchestration of multiple speech muscles while actively monitoring the sensory consequences through auditory and somatosensory feedback. A mistuned sensorimotor mechanism may disrupt the normal integration of motor and auditory brain systems in several developmental and acquired motor speech disorders, including stuttering, speech apraxia, speech sound disorders and dysarthria. Electroencephalography (EEG) provides a non-invasive measure of online neural activity with the potential to assess (deficiencies in) sensorimotor integration during speech production. However, the relation between EEG and continuous speech output remains poorly characterized. Here, we investigate the prediction of auditory speech output from multivariate EEG patterns under three levels of auditory masking. A decoding analysis was combined with a lag-based approach, allowing us to study predictions based on instantaneous EEG-speech relations as well as their involvement in feedforward and feedback processes. For all masking conditions, we found consistent decoding at instantaneous and speech-feedback lags, but not at feedforward lags. Furthermore, the level of auditory masking modulated decoding at both the instantaneous and feedback lags. Our results provide insights into neural monitoring during online speech production and offer a window onto the dysfunction underlying motor speech disorders, which may help optimize brain-informed therapies for speech fluency.
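The abstract does not specify the decoding model, but the general logic of a lag-based decoding analysis (relating EEG shifted backward, not at all, or forward in time to the produced speech, to separate feedforward, instantaneous, and feedback relations) can be illustrated with a minimal sketch. The sketch below uses synthetic data, a speech amplitude envelope as the target, ridge regression via scikit-learn, and an arbitrary lag range; all of these choices are assumptions for illustration and are not taken from the paper.

```python
# Minimal sketch (not the authors' code): lag-based decoding of a speech
# envelope from multichannel EEG. Data, model, and lag range are assumed.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs = 100                                    # sampling rate in Hz (assumed)
n_samples, n_channels = 6000, 64
eeg = rng.standard_normal((n_samples, n_channels))   # EEG: time x channels
envelope = rng.standard_normal(n_samples)            # produced speech envelope

def decode_at_lag(eeg, envelope, lag):
    """Decoding accuracy when EEG is shifted by `lag` samples.
    lag < 0: EEG precedes speech (feedforward lags).
    lag = 0: instantaneous EEG-speech relation.
    lag > 0: EEG follows speech (feedback lags)."""
    if lag < 0:
        X, y = eeg[:lag], envelope[-lag:]
    elif lag > 0:
        X, y = eeg[lag:], envelope[:-lag]
    else:
        X, y = eeg, envelope
    # 5-fold cross-validated R^2 as a simple decoding score
    return cross_val_score(Ridge(alpha=1.0), X, y, cv=5, scoring="r2").mean()

lags_ms = np.arange(-200, 201, 20)          # -200 ms to +200 ms (assumed range)
for lag_ms in lags_ms:
    score = decode_at_lag(eeg, envelope, int(lag_ms * fs / 1000))
    print(f"lag {lag_ms:+4d} ms  decoding R^2 = {score:+.3f}")
```

With the random data used here, scores hover around zero; in a real analysis, above-chance scores at zero and positive lags (but not negative lags) would correspond to the pattern of instantaneous and feedback decoding reported in the abstract.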