Audiovisual causal inference in implicit spatial representations

Abstract

The causal inference problem in multisensory perception describes the challenge our brains face in a multisensory environment: deciding whether sensory stimuli originate from a common source and should be integrated, or from distinct sources and should be segregated. The brain addresses this problem by inferring the causal structure from the spatiotemporal disparity of multisensory stimuli. However, it remains unclear whether the brain handles causal inference automatically (implicitly) or requires effortful cognitive processing (explicitly). In this study, we investigated how human observers (N = 47) process audiovisual information by computing implicit auditory spatial representations from a novel audiovisual distance estimation task analysed with multidimensional scaling. We compared these implicit representations to explicit auditory spatial representations obtained from three classical explicit auditory localisation and causal judgment tasks. We found that visual biases (i.e., the ventriloquist effect) in implicit auditory spatial representations were less informed by the spatial disparity of the audiovisual stimuli than explicit representations, and were best captured by a computational stochastic fusion model. Only in the explicit joint localisation and causal judgment task did small spatial disparity increase the visual bias, as predicted by a computational Bayesian causal inference model. Our results suggest that causal inference requires explicit cognitive processing, which observers apply only when the causal structure of the stimuli is directly relevant to the task. Otherwise, the brain relies on simpler automatic decision strategies such as stochastic fusion, possibly involving only lower regions of the cortical hierarchy.
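To make the contrast between the two fitted model classes concrete, the sketch below implements the standard Bayesian causal inference formulation for audiovisual localisation (Körding et al., 2007) alongside a simple stochastic fusion rule. This is an illustrative sketch, not the authors' implementation: the parameter values, the model-averaging readout, and the exact form of the stochastic fusion rule (fusing with a fixed, disparity-independent probability) are assumptions for demonstration.

```python
import numpy as np

def bci_auditory_estimate(x_a, x_v, sigma_a=8.0, sigma_v=2.0,
                          sigma_p=15.0, p_common=0.5):
    """Auditory location estimate under Bayesian causal inference
    (model averaging, following Koerding et al., 2007).

    x_a, x_v : noisy auditory and visual measurements (e.g., degrees)
    sigma_a, sigma_v : sensory noise; sigma_p : width of the spatial
    prior (centred at 0); p_common : prior probability of one cause.
    All parameter values here are illustrative, not fitted.
    """
    # Likelihood of the measurement pair under a common cause (C = 1)
    var1 = (sigma_a**2 * sigma_v**2 + sigma_a**2 * sigma_p**2
            + sigma_v**2 * sigma_p**2)
    like_c1 = np.exp(-0.5 * ((x_a - x_v)**2 * sigma_p**2
                             + x_a**2 * sigma_v**2
                             + x_v**2 * sigma_a**2) / var1) \
        / (2 * np.pi * np.sqrt(var1))
    # Likelihood under two independent causes (C = 2)
    var_a, var_v = sigma_a**2 + sigma_p**2, sigma_v**2 + sigma_p**2
    like_c2 = np.exp(-0.5 * (x_a**2 / var_a + x_v**2 / var_v)) \
        / (2 * np.pi * np.sqrt(var_a * var_v))
    # Posterior probability of a common cause
    post_c1 = (like_c1 * p_common
               / (like_c1 * p_common + like_c2 * (1 - p_common)))

    # Reliability-weighted estimates under each causal structure
    # (the prior mean is 0, so it drops out of the numerators)
    s_fused = ((x_a / sigma_a**2 + x_v / sigma_v**2)
               / (1 / sigma_a**2 + 1 / sigma_v**2 + 1 / sigma_p**2))
    s_segreg = (x_a / sigma_a**2) / (1 / sigma_a**2 + 1 / sigma_p**2)

    # Model averaging: weight both estimates by the causal posterior
    return post_c1 * s_fused + (1 - post_c1) * s_segreg

def stochastic_fusion_estimate(x_a, x_v, sigma_a=8.0, sigma_v=2.0,
                               eta=0.5, rng=None):
    """Stochastic fusion (assumed formulation): fuse with a fixed
    probability eta that does NOT depend on audiovisual disparity."""
    rng = np.random.default_rng() if rng is None else rng
    s_fused = ((x_a / sigma_a**2 + x_v / sigma_v**2)
               / (1 / sigma_a**2 + 1 / sigma_v**2))
    return s_fused if rng.random() < eta else x_a
```

Under this sketch, the visual bias of the Bayesian causal inference model shrinks as spatial disparity grows, because the common-cause posterior falls, whereas the stochastic fusion rule predicts an average bias that is independent of disparity. This is the qualitative signature distinguishing the explicit from the implicit spatial representations described in the abstract.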
