Separable neurocomputational mechanisms underlying multisensory learning

Abstract

Efficient control of behavior requires multisensory learning from information distributed across senses. However, most neurocomputational studies have focused on unisensory signals. Here, we identify distinct but interacting neurocomputational mechanisms that support learning of multisensory associations. We designed a task in which behaviorally relevant information was available only from combinations of visual cues with either auditory or tactile cues. In 58 participants undergoing fMRI, we dissociated three processes: multisensory statistical learning (SL), modeled as stimulus-locked Shannon surprise; reinforcement learning (RL), modeled as feedback-locked signed reward prediction errors (RPEs); and feedback-locked unsigned RPEs (uRPEs), reflecting surprise about reward outcomes. Behaviorally, response times scaled with Shannon surprise (SL) while accuracy improved with feedback (RL). Model-based fMRI revealed dissociable but complementary networks: RPEs engaged ventral striatum, vmPFC, and left angular gyrus; surprise recruited bilateral angular gyrus, dlPFC, and precuneus; and uRPEs involved insula, dorsomedial prefrontal, and lateral frontoparietal cortices. Several of these regions are not typically implicated in unisensory studies, suggesting specialization for multisensory learning. All three networks were modality-general, i.e., they showed comparable strength for audiovisual and visuotactile learning. Notably, left angular gyrus tracked both Shannon surprise and RPE, identifying it as a potential hub for integrating structural and value information. These findings reveal that the brain engages distinct but complementary systems for structure-based, reward-based, and outcome-surprise computations. By combining behavioral modeling and fMRI with a novel task design, we provide a principled framework for dissecting the neurocomputational architecture of multisensory learning.
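To make the three model-based regressors concrete, here is a minimal sketch (not the authors' actual model code; the learning rate and probability values are illustrative assumptions) of how Shannon surprise, signed RPEs, and unsigned RPEs are standardly computed, using a simple Rescorla–Wagner value update:

```python
import math

def shannon_surprise(p):
    """Shannon surprise (information content) of an event with
    estimated probability p: I = -log2(p)."""
    return -math.log2(p)

def rw_update(V, reward, alpha=0.1):
    """One Rescorla-Wagner step. The signed RPE drives the value
    update; its absolute value (uRPE) indexes outcome surprise."""
    rpe = reward - V          # signed reward prediction error
    urpe = abs(rpe)           # unsigned RPE (outcome surprise)
    V_new = V + alpha * rpe   # value update
    return V_new, rpe, urpe

# Illustrative example: a reward arrives after a low value estimate
V, rpe, urpe = rw_update(V=0.2, reward=1.0, alpha=0.1)
print(round(V, 2), rpe, urpe)           # 0.28 0.8 0.8
print(shannon_surprise(0.25))           # 2.0 bits
```

In the model-based fMRI analysis described above, trial-by-trial values of these quantities serve as parametric regressors: surprise time-locked to stimulus onset (SL), and signed/unsigned RPEs time-locked to feedback (RL and outcome surprise, respectively).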
