Causal inference shapes crossmodal postdictive perception within the temporal window of multisensory integration
Abstract
In our environment, stimuli from different sensory modalities are processed within a temporal window of multisensory integration that spans several hundred milliseconds. During this window, the processing and perception of a stimulus are influenced not only by preceding and current information, but also by input that follows the stimulus. To date, the computational mechanisms underlying crossmodal backward processing, which we refer to as crossmodal postdiction, are not well understood. In this study, we examined crossmodal postdiction in the audiovisual (AV) rabbit illusion, in which postdiction occurs when flash-beep pairs are presented shortly before and shortly after a single flash or a single beep. We collected behavioral data from 32 human participants and fitted four competing models: a Bayesian causal inference (BCI) model, a forced-fusion (FF) model, a forced-segregation (FS) model, and a non-postdictive BCI (BCI-NP) model. The BCI model fit the data well and outperformed the other models. Building on previous findings demonstrating causal inference during non-postdictive multisensory integration, our study shows that the BCI framework also effectively explains crossmodal postdiction: observers accumulate causal evidence that, once a causal decision has been made, can retroactively influence the crossmodal perception of preceding sensory stimuli. Our study demonstrates that the AV rabbit illusion forms within a temporal window of multisensory integration that encompasses past, present, and future sensory inputs, and that this integration is well explained by the Bayesian causal inference framework.
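To make the BCI framework concrete, the sketch below implements a standard causal-inference observer for a single continuous audiovisual feature with model averaging, in the spirit of the classic formulation by Körding et al. (2007). This is an illustrative assumption, not the model fitted in this study, which operates on flash/beep numerosity across the postdictive window; the parameter names (`sigma_a`, `sigma_v`, `sigma_p`, `mu_p`, `p_common`) are generic placeholders.

```python
import numpy as np

def bci_estimate(x_a, x_v, sigma_a, sigma_v, sigma_p, mu_p, p_common):
    """Illustrative Bayesian causal inference with model averaging
    (continuous-feature formulation after Kording et al., 2007).

    x_a, x_v      : noisy auditory and visual measurements
    sigma_a/v     : sensory noise standard deviations
    sigma_p, mu_p : Gaussian prior over the stimulus feature
    p_common      : prior probability that both signals share one cause
    """
    var_a, var_v, var_p = sigma_a**2, sigma_v**2, sigma_p**2

    # Likelihood of the measurements under a common cause (C = 1),
    # with the shared source integrated out analytically.
    denom1 = var_a * var_v + var_a * var_p + var_v * var_p
    like_c1 = np.exp(-0.5 * ((x_a - x_v) ** 2 * var_p
                             + (x_a - mu_p) ** 2 * var_v
                             + (x_v - mu_p) ** 2 * var_a) / denom1) \
        / (2 * np.pi * np.sqrt(denom1))

    # Likelihood under independent causes (C = 2): each measurement is
    # explained by its own source drawn from the prior.
    like_c2 = np.exp(-0.5 * ((x_a - mu_p) ** 2 / (var_a + var_p)
                             + (x_v - mu_p) ** 2 / (var_v + var_p))) \
        / (2 * np.pi * np.sqrt((var_a + var_p) * (var_v + var_p)))

    # Posterior probability of a common cause (the causal decision variable).
    post_c1 = p_common * like_c1 / (p_common * like_c1
                                    + (1 - p_common) * like_c2)

    # Reliability-weighted optimal estimates under each causal structure.
    s_fused = (x_a / var_a + x_v / var_v + mu_p / var_p) \
        / (1 / var_a + 1 / var_v + 1 / var_p)
    s_a_seg = (x_a / var_a + mu_p / var_p) / (1 / var_a + 1 / var_p)

    # Model averaging: weight each estimate by its causal posterior.
    s_a_hat = post_c1 * s_fused + (1 - post_c1) * s_a_seg
    return s_a_hat, post_c1
```

The key quantity is the posterior probability of a common cause: when it is high, the auditory estimate is pulled toward the fused percept, and when it is low, the modalities remain segregated. The FF and FS comparison models correspond to fixing this posterior at 1 or 0, which is exactly the flexibility the full BCI model adds.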