Modelling Audio-Visual Reaction Time with Recurrent Mean-Field Networks
Abstract
Understanding how the brain integrates multisensory information during detection and decision-making remains an active area of research. While many inferences have been drawn about behavioural outcomes, key questions persist regarding both the nature of environmental cues and the internal mechanisms of integration. These complexities make multisensory integration particularly well suited to investigation through mathematical modelling. In this study, we present three models of audio-visual integration within a biologically motivated mean-field recurrent framework. These models extend a non-linear system of differential equations originally developed for unisensory decision-making. The OR and SUM models represent opposing ends of the integration spectrum: the former simulates independent unisensory processing using a winner-take-all (WTA) strategy, while the latter implements a linear summation model for full integration. A third model, the REPEAT Model, incorporates the switch and repeat costs observed in multisensory tasks. We simulate 121 participants with varying unisensory evidence accumulation rates, capturing behavioural diversity from modality dominance to balanced integration. Model outputs (reaction time and accuracy) were compared with empirical results from audio-visual detection tasks. We further fitted the outputs to a drift diffusion model, allowing comparison between simulated and theoretically optimal multisensory drift rates. The OR and SUM Models reproduced established unisensory response patterns. Drift diffusion analysis revealed suboptimal integration in the OR Model and optimal integration in the SUM Model. However, the SUM Model also produced supra-optimal responses under certain conditions, inconsistent with behavioural data. The REPEAT Model successfully captured the role of priming in sensory repetition effects, distinguishing it from true multisensory integration.
Overall, these models highlight how biologically grounded mathematical frameworks can shed light on the mechanisms underlying multisensory integration, particularly the nuanced contributions of modality repetition and integration efficiency.
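The contrast between the OR and SUM strategies can be illustrated with a toy single-threshold drift-diffusion simulation. This is a minimal sketch, not the authors' implementation: the threshold, noise level, time step, and the drift rates `v_a` and `v_v` are all hypothetical, and the OR model is reduced to a race (WTA) between two independent unisensory accumulators while the SUM model linearly sums the two drifts into one accumulator.

```python
import numpy as np

def first_passage_time(drift, threshold=1.0, noise=1.0, dt=0.005,
                       max_t=5.0, rng=None):
    """Simulate one first-passage time of a drift-diffusion process
    toward a single absorbing threshold (a simple detection model)."""
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while x < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t

rng = np.random.default_rng(0)
n = 300
v_a, v_v = 1.5, 1.0  # hypothetical auditory and visual drift rates

# OR model: winner-take-all race between independent unisensory accumulators
rt_or = [min(first_passage_time(v_a, rng=rng),
             first_passage_time(v_v, rng=rng)) for _ in range(n)]

# SUM model: one accumulator driven by the linearly summed drift
rt_sum = [first_passage_time(v_a + v_v, rng=rng) for _ in range(n)]

print(f"OR mean RT:  {np.mean(rt_or):.3f} s")
print(f"SUM mean RT: {np.mean(rt_sum):.3f} s")
```

Under these assumptions the summed-drift accumulator responds faster on average than the race, echoing the abstract's point that full linear summation can outpace independent unisensory processing and, in some regimes, exceed the statistically optimal benchmark.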