A common algorithm for confidence judgements across visual, auditory and audio-visual decisions
Abstract
Most studies investigating the computational basis of decision confidence have focused on simple visual perceptual tasks, leaving open questions about how confidence is formed in decisions involving other sensory modalities or those requiring the integration of information across modalities. To address these gaps, we used computational modelling to analyse confidence judgements in perceptual decisions involving visual, auditory, and audio-visual stimuli. Drawing on research into visual confidence, we adapted models from the literature to evaluate their fit to our data, comparing three popular classes: unscaled evidence strength, scaled evidence strength, and Bayesian models. Our results show that the scaled evidence strength models consistently outperformed the other model classes across all tasks and could also be used to predict behaviour in the audio-visual task from the unidimensional auditory and visual model fits. These findings suggest that confidence judgements across different perceptual decisions rely on a shared algorithm that dynamically accounts for both sensory uncertainty and evidence strength, without the computation of posterior probabilities. Additionally, we investigated the algorithms used for multidimensional (audio-visual) confidence judgements specifically, showing that participants integrated both the visual and auditory dimensions of the stimulus, rather than relying solely on the most informative modality, and used a modality-independent measure of sensory uncertainty to adjust their confidence. Overall, our findings provide evidence for a common algorithm underlying confidence judgements across modalities and demonstrate the broad applicability of the scaled evidence strength algorithm, even in tasks requiring the integration of distinct sensory information.
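The three model classes compared in the abstract differ in how a trial's internal evidence and sensory noise enter the confidence computation. The sketch below is a minimal, hypothetical illustration of those differences under an assumed Gaussian-evidence, equal-prior setup; the function forms and parameter names are illustrative and are not the paper's exact model parameterisations.

```python
import math

def unscaled_confidence(x):
    # Unscaled evidence strength: confidence tracks the magnitude of the
    # internal evidence alone, ignoring sensory uncertainty.
    return abs(x)

def scaled_confidence(x, sigma):
    # Scaled evidence strength: evidence is normalised by the trial's
    # sensory noise (sigma), so confidence adapts to uncertainty
    # without computing a posterior probability.
    return abs(x) / sigma

def bayesian_confidence(x, sigma):
    # Bayesian confidence: posterior probability that the chosen
    # category is correct, assuming Gaussian evidence and equal priors
    # (computed via the standard normal CDF).
    z = abs(x) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

On this sketch, doubling the noise halves a scaled-evidence confidence read-out (`scaled_confidence(1.0, 2.0)` gives 0.5) while leaving the unscaled read-out untouched, which is the qualitative distinction the model comparison exploits.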