Can you feel what I am saying? Speech-based vibrotactile stimulation enhances the cortical tracking of attended speech in a multi-talker background
Abstract
In environments with multiple talkers, humans can tune in to a speaker of interest while ignoring competing voices. In such conditions, however, auditory cortices track the attended speech envelope rhythms (cortical tracking of speech, CTS) less accurately than in quiet, hindering intelligibility. Visual speech cues (e.g., lip movements) can enhance this CTS, but it remains unclear whether other non-auditory sensory cues, such as tactile input, provide comparable benefits through similar neural mechanisms. Here, using magnetoencephalography, we quantified syllabic (4-8 Hz) and phrasal (<1 Hz) CTS while participants attended to connected speech alone, together with synchronous or asynchronous speech-based vibrations, or with the corresponding speaker video, in both quiet and multi-talker background noise. We hypothesized that, in noise, speech-based vibrotactile stimulation improves comprehension by enhancing CTS and by modulating auditory-seeded functional connectivity with extra-auditory neocortical areas. Results revealed that synchronous vibrotactile stimulation improved comprehension in multi-talker noise and increased syllabic CTS at the right auditory cortex, with the magnitude of this CTS increase correlating with comprehension performance. This audio-tactile CTS enhancement was accompanied by stronger beta-band auditory cortex connectivity with the ipsilateral angular and ventral inferior temporal gyri, alongside reduced alpha-band coupling with the precuneus. These findings suggest that vibrotactile input can support speech-in-noise processing by shaping both local auditory cortical activity and auditory-seeded long-range functional connectivity.
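As a hedged illustration of the kind of analysis the abstract describes, the sketch below shows one common way CTS is quantified: magnitude-squared coherence between the speech amplitude envelope and a neural signal, summarized within the syllabic (4-8 Hz) band. This is not the authors' pipeline; the sampling rate, the synthetic stand-in signals, and the choice of scipy.signal.coherence are all assumptions made for illustration.

# Minimal sketch (not the authors' pipeline): envelope-brain coherence
# as a proxy for cortical tracking of speech (CTS).
import numpy as np
from scipy.signal import hilbert, coherence

fs = 1000                      # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)   # 60 s of synthetic data

# Stand-in "speech": broadband carrier with 5 Hz syllabic amplitude modulation
envelope_true = 1 + 0.5 * np.sin(2 * np.pi * 5 * t)
speech = envelope_true * np.random.randn(t.size)

# Speech amplitude envelope via the Hilbert transform
speech_env = np.abs(hilbert(speech))

# Stand-in "auditory cortex" signal that partially tracks the envelope
meg = 0.3 * envelope_true + np.random.randn(t.size)

# Magnitude-squared coherence; CTS is often summarized as mean coherence
# within the band of interest, here the syllabic 4-8 Hz range
f, coh = coherence(speech_env, meg, fs=fs, nperseg=4 * fs)
band = (f >= 4) & (f <= 8)
print(f"Mean 4-8 Hz envelope-MEG coherence: {coh[band].mean():.3f}")

The same computation restricted to frequencies below 1 Hz would correspond to the phrasal-rate CTS mentioned in the abstract; attention effects are then assessed by comparing such coherence values across listening conditions.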