Causal inference shapes crossmodal postdiction in multisensory integration
Abstract
In our environment, stimuli from different sensory modalities are processed within a temporal window of multisensory integration that spans several hundred milliseconds. During this window, stimulus processing is influenced not only by preceding and current information but also by input that follows the stimulus. The computational mechanisms underlying this crossmodal backward processing, which we refer to as crossmodal postdiction, are not well understood. We examined crossmodal postdiction in the audiovisual (AV) rabbit illusion, in which postdiction occurs when flash-beep pairs are presented shortly before and shortly after a single flash or a single beep. We collected behavioral data from 32 participants and fitted four competing models: a Bayesian causal inference (BCI) model, a forced-fusion model, a forced-segregation model, and a non-postdictive BCI model. The BCI model fit the data well and outperformed the three alternatives. Building on findings demonstrating causal inference during non-postdictive multisensory integration, our results show that the BCI framework also explains crossmodal postdiction: observers accumulate causal evidence which, once a causal decision has been made, retroactively shapes their perception of the preceding stimuli. Crossmodal postdiction in the AV rabbit illusion is thus formed within a temporal window of multisensory integration that encompasses past, present, and future input, and is well captured by the BCI framework.
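For readers unfamiliar with the BCI framework, the sketch below illustrates its core computation in the standard (non-postdictive) form with Gaussian likelihoods and model averaging: the observer infers the posterior probability that two measurements share a common cause and weights the fused and segregated estimates accordingly. This is a minimal illustration of the general framework, not the preprint's own postdictive model; all function names, parameter values, and distributional assumptions here are ours.

```python
import numpy as np

def bci_auditory_estimate(x_a, x_v, sigma_a=1.0, sigma_v=0.5,
                          sigma_p=10.0, mu_p=0.0, p_common=0.5):
    """Model-averaged auditory estimate under Bayesian causal inference.

    x_a, x_v : noisy internal measurements of the auditory and visual cues
    sigma_a, sigma_v : sensory noise standard deviations
    sigma_p, mu_p    : Gaussian prior over stimulus values
    p_common         : prior probability that both cues share one cause
    All parameter values are illustrative placeholders.
    """
    var_a, var_v, var_p = sigma_a ** 2, sigma_v ** 2, sigma_p ** 2

    # Likelihood under a common cause (C = 1): the shared source s is
    # integrated out analytically, leaving this closed-form expression.
    var_c1 = var_a * var_v + var_a * var_p + var_v * var_p
    like_c1 = (np.exp(-0.5 * ((x_a - x_v) ** 2 * var_p
                              + (x_a - mu_p) ** 2 * var_v
                              + (x_v - mu_p) ** 2 * var_a) / var_c1)
               / (2 * np.pi * np.sqrt(var_c1)))

    # Likelihood under independent causes (C = 2): each measurement is
    # explained by its own source drawn from the prior.
    like_c2 = ((np.exp(-0.5 * (x_a - mu_p) ** 2 / (var_a + var_p))
                / np.sqrt(2 * np.pi * (var_a + var_p)))
               * (np.exp(-0.5 * (x_v - mu_p) ** 2 / (var_v + var_p))
                  / np.sqrt(2 * np.pi * (var_v + var_p))))

    # Posterior probability of a common cause (Bayes' rule).
    post_c1 = (like_c1 * p_common
               / (like_c1 * p_common + like_c2 * (1 - p_common)))

    # Optimal estimates under each causal structure:
    # reliability-weighted averages that include the prior.
    s_fused = ((x_a / var_a + x_v / var_v + mu_p / var_p)
               / (1 / var_a + 1 / var_v + 1 / var_p))
    s_seg_a = (x_a / var_a + mu_p / var_p) / (1 / var_a + 1 / var_p)

    # Model averaging: weight each estimate by its causal posterior.
    return post_c1 * s_fused + (1 - post_c1) * s_seg_a

# Example: a visual measurement close to the auditory one pulls the
# auditory estimate toward fusion; a distant one leaves it segregated.
print(bci_auditory_estimate(x_a=1.0, x_v=1.2))  # near -> mostly fused
print(bci_auditory_estimate(x_a=1.0, x_v=8.0))  # far  -> mostly segregated
```

A postdictive variant of this computation, as studied in the preprint, would let measurements arriving after a stimulus contribute to the causal posterior before the percept of that earlier stimulus is settled; the sketch above shows only the simultaneous-cue case.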