Encoding of speech modes and loudness in ventral precentral gyrus


Abstract

The ability to vary the mode and loudness of speech is an important part of the expressive range of human vocal communication. However, the encoding of these behaviors in the ventral precentral gyrus (vPCG) has not been studied at the resolution of neuronal firing rates. We investigated this in two participants who had intracortical microelectrode arrays implanted in their vPCG as part of a speech neuroprosthesis clinical trial. Neuronal firing rates in vPCG modulated strongly as a function of attempted mimed, whispered, normal, or loud speech. At the neural ensemble level, mode/loudness and phonemic content were encoded in distinct neural subspaces. Attempted mode/loudness could be decoded from vPCG with accuracies of 94% and 89% for the two participants, respectively, and corresponding neural preparatory activity could be detected hundreds of milliseconds before speech onset. We then developed a closed-loop loudness decoder that achieved 94% online accuracy in modulating the output of a brain-to-text speech neuroprosthesis based on attempted loudness. These findings demonstrate the feasibility of decoding mode and loudness from vPCG, paving the way for speech neuroprostheses capable of synthesizing more expressive speech.