Cortical Representation of Glottal Events during Speech Production

Abstract

To produce complex motor behaviors such as speech, the nervous system must accommodate the fact that sensory feedback is delayed relative to actual movements; by most accounts, this is accomplished using an internal model that predicts the current state of the body’s periphery from recent motor system output. Here we show that onsets of events in the glottal waveform, measured via electroglottography, are encoded in the human electroencephalogram (EEG) during speech production, with encoding maximal at zero time lag. Conversely, we show that glottal event times can be decoded from the EEG. Furthermore, after prolonged exposure to delayed auditory feedback, subjects show a robust recalibration of their behaviorally measured threshold for detecting auditory-motor mismatches, and the decoding models that perform best under normal speaking conditions show a corresponding shift in predicted glottal event times relative to actual events. This suggests that decoding performance is driven by plastic internal representations of peripheral event timing, and it rules out movement artifacts as the source of the decoded signal. Our results supply a missing component of a mechanism for associating specific feedback events with the neurons whose activity gave rise to the corresponding movements, mirroring the observation that songbird HVC neurons burst synchronously with events in the trajectories of the biophysical parameters that control voicing in the bird.
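As a concrete illustration of the kind of analysis the abstract describes, the sketch below detects event onsets in an electroglottography (EGG) trace and averages EEG epochs time-locked to those onsets, then reports the lag of the largest evoked deflection (the paper reports encoding that is maximal at zero lag). This is not the authors' pipeline: the sampling rate, channel count, epoch window, and peak-based event detector are all assumptions chosen for the demo, and the data are synthetic stand-ins.

```python
# Minimal sketch: relate glottal event onsets (from EGG) to EEG via
# event-locked averaging. All parameters below are illustrative.
import numpy as np
from scipy.signal import find_peaks

fs = 1000  # Hz; assumed common sampling rate for EEG and EGG

# --- synthetic stand-ins for recorded data ---
rng = np.random.default_rng(0)
t = np.arange(0, 60, 1 / fs)                       # 60 s "recording"
egg = np.sin(2 * np.pi * 4 * t) + 0.1 * rng.standard_normal(t.size)
eeg = 0.5 * rng.standard_normal((8, t.size))       # 8 hypothetical channels

# --- detect glottal event onsets in the EGG waveform ---
# Here "events" are simply peaks in the demo trace; a real pipeline
# would use a dedicated glottal-event detector on the EGG signal.
onsets, _ = find_peaks(egg, height=0.5, distance=int(0.1 * fs))

# --- event-locked averaging of the EEG (classic evoked response) ---
pre, post = int(0.2 * fs), int(0.3 * fs)           # -200 ms to +300 ms
valid = onsets[(onsets >= pre) & (onsets + post < eeg.shape[1])]
epochs = np.stack([eeg[:, i - pre:i + post] for i in valid])
evoked = epochs.mean(axis=0)                       # channels x time

# Lag of the largest mean-absolute evoked deflection relative to the
# event; on real data, zero lag would indicate concurrent encoding.
lags_ms = (np.arange(-pre, post) / fs) * 1000
peak_lag = lags_ms[np.argmax(np.abs(evoked).mean(axis=0))]
print(f"{len(valid)} events; peak evoked deflection at {peak_lag:.0f} ms")
```

Event-locked averaging is only the simplest version of the idea; the decoding direction described in the abstract would instead fit a model mapping EEG to event times and compare predicted against actual onsets, which is how a timing shift after delayed-feedback exposure could be measured.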