Decoding the Unintelligible: Neural Speech Tracking in Low Signal-to-Noise Ratios
Abstract
Understanding speech in noisy environments is challenging for both human listeners and speech technologies, with significant implications for hearing aid design and communication systems. Auditory attention decoding (AAD) aims to identify the attended talker from neural signals so that talker's speech can be enhanced to improve perception. However, it remains unclear whether this decoding stays reliable under severely degraded listening conditions. In this study, we investigated selective neural tracking of the attended talker under adverse listening conditions. Using EEG recordings during a multi-talker speech perception task with varying signal-to-noise ratio (SNR), we analyzed participants’ task performance—quantified through a repeated-word detection task—as a proxy for perceptual accuracy and attentional focus, while neural responses were used to decode the attended talker. Despite substantial degradation in task performance, neural tracking of attended speech persisted, suggesting that the brain retains sufficient information for decoding. These findings demonstrate that AAD remains feasible even in highly challenging conditions, offering a potential avenue for brain-informed audio technologies, such as hearing aids, that leverage AAD to enhance listening in real-world noisy environments.
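To make the decoding idea concrete, the following is a minimal sketch of one common AAD strategy (stimulus reconstruction): an envelope reconstructed from EEG is correlated with the speech envelope of each candidate talker, and the talker with the highest correlation is labeled as attended. This is an illustrative toy example with synthetic signals and a hypothetical `decode_attended` helper, not the pipeline used in the study.

```python
import numpy as np

def decode_attended(reconstructed, envelopes):
    """Pick the talker whose speech envelope correlates best
    with the envelope reconstructed from EEG."""
    corrs = [np.corrcoef(reconstructed, env)[0, 1] for env in envelopes]
    return int(np.argmax(corrs)), corrs

# Synthetic demo: two talker envelopes; the "EEG-reconstructed" envelope
# is a noisy copy of talker 0's envelope, mimicking a degraded (low-SNR)
# reconstruction. All signals here are made up for illustration.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 640)                      # 10 s at 64 Hz
talker0 = np.abs(np.sin(2 * np.pi * 0.7 * t))    # toy envelope, talker 0
talker1 = np.abs(np.sin(2 * np.pi * 1.3 * t + 1.0))  # toy envelope, talker 1
reconstructed = talker0 + rng.standard_normal(t.size)  # heavily noise-corrupted

idx, corrs = decode_attended(reconstructed, [talker0, talker1])
print(idx)  # index of the decoded attended talker
```

Even with noise power far exceeding the envelope's, the correlation with the attended talker's envelope typically stays measurably above that of the unattended one, which is the intuition behind decoding remaining feasible at low SNR.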