Perception and Encoding of Narrative Events During Continuous Speech Listening in Background Noise


Abstract

Everyday speech comprehension involves interpreting overarching themes as they unfold continuously over time and structuring experiences for effective encoding and future recall. Characterizing downstream cognition, particularly how the brain perceptually organizes and encodes information, can help elucidate some of the mechanisms underlying listening effort. Recent advances in memory research highlight event segmentation, the process of identifying distinct events within dynamic environments, as a core mechanism underlying how we perceive, encode, and recall experiences. In the current study, we examine how background noise affects event segmentation during speech listening and its downstream effects on memory. Participants listened to and segmented narratives presented at varying signal-to-noise ratios (clear, +2 dB SNR, –4 dB SNR) and subsequently completed a free recall task. Increasing background noise reduced segmentation consistency and recall accuracy, indicating that challenging acoustics disrupt perceptual organization and encoding; however, listeners continued to identify meaningful event boundaries even when intelligibility declined by approximately 30%. Our analyses further suggest that segmentation was predominantly anticipatory, with listeners marking event boundaries towards the end of an event, consistent with proactive updating of event models rather than a reaction to a prediction error. Additionally, segmentation–recall coupling was strongest under moderate noise, implying that moderate listening difficulty may enhance engagement. Ultimately, these findings demonstrate that while adverse conditions impair detailed encoding, the cognitive mechanisms that structure experience remain robust, offering insight into how listening effort shapes perception and memory in complex, real-world speech.