Shared and modality-specific brain networks underlying predictive coding of temporal sequences

Abstract

Predictive coding posits that the brain continuously generates and updates internal models to anticipate incoming sensory input. While auditory and visual modalities have been studied independently in this context, direct comparisons using matched paradigms are scarce. Here, we employed magnetoencephalography (MEG) to investigate how the brains of 83 participants encode and consciously recognise temporally unfolding sequences that acquire Gestalt-like structure over time, a feature rarely addressed in cross-modal research. Participants memorised matched auditory and visual sequences with coherent temporal structure and later identified whether test sequences were familiar or novel. Multivariate decoding robustly discriminated the brain mechanisms underlying encoding and recognition of memorised versus novel sequences, with sustained temporal generalisation in the auditory domain and time-specific responses in the visual domain. Using the BROAD-NESS pipeline, we identified modality-specific and supramodal brain networks. Auditory memory engaged the auditory cortex, cingulate gyrus, and hippocampus, whereas visual memory involved the orbitofrontal cortex and visual areas. Notably, both modalities recruited a shared network including the hippocampus and medial cingulate gyrus during recognition. These findings provide compelling evidence for distinct and shared predictive learning mechanisms across sensory systems, advancing our understanding of how the brain integrates and evaluates temporally structured, Gestalt-like information.
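For readers unfamiliar with the temporal-generalisation analysis mentioned in the abstract, the sketch below shows how such a decoding matrix is commonly computed in MEG research using MNE-Python's GeneralizingEstimator. It is a minimal illustration, not the authors' actual pipeline: the data shapes, placeholder labels, and logistic-regression classifier are assumptions made for the example.

# Minimal sketch of time-generalised decoding (memorised vs. novel sequences).
# Assumes sensor-space MEG epochs shaped (n_trials, n_channels, n_times);
# random data and labels stand in for real recordings.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from mne.decoding import GeneralizingEstimator, cross_val_multiscore

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 306, 120))  # 100 trials, 306 sensors, 120 samples
y = rng.integers(0, 2, size=100)          # 0 = novel, 1 = memorised (placeholders)

# One classifier is trained at each time point and tested at every other one.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
time_gen = GeneralizingEstimator(clf, scoring="roc_auc", n_jobs=1)

# scores[i, j]: decoder trained at time i, tested at time j, averaged over folds.
scores = cross_val_multiscore(time_gen, X, y, cv=5, n_jobs=1).mean(axis=0)

In the resulting train-time by test-time matrix, broad off-diagonal generalisation would correspond to the sustained auditory pattern reported here, whereas above-chance scores confined to the diagonal would correspond to the time-specific visual pattern.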
