A neurocomputational framework of cross-context generalization: dynamic representational geometry in high-level visual cortex and dual coding in vmPFC
Abstract
The ability to generalize learned knowledge across contexts is crucial to human cognition, yet the underlying neural mechanisms remain unclear. Here, we employed a combination of functional magnetic resonance imaging (fMRI), magnetoencephalography (MEG), and artificial neural network (ANN) modeling to investigate how the ventral temporal cortex (VTC) supports context-dependent value inference. Our results reveal that VTC facilitates value inference through two distinct mechanisms: conveying object identity information to indirectly support value computation in downstream regions, and dynamically adjusting object representational geometry via amplitude- and phase-based modulation of voxel preference features to directly engage in value computation. Crucially, the ventromedial prefrontal cortex (vmPFC) acts as an integrative hub, establishing dual representations for flexible behavior that simultaneously encode (a) abstract value, enabling cross-context generalization through parallel structure, and (b) the modulated VTC representation, enabling context-specific value inference. Overall, our results advance a novel neurocomputational framework explaining how adaptive sensory processing mechanisms support generalization across contexts.
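The abstract describes amplitude- and phase-based modulation of voxel preference features only at a conceptual level. Below is a minimal, hypothetical sketch of what such a modulation could look like: each voxel is assigned a baseline tuning profile over object features, a context-specific gain scales its amplitude, and a phase offset shifts its tuning in a Fourier basis. All names (`modulate_voxel_preferences`, `pref`, `gain`, `phase_shift`) and the Fourier-phase formulation are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def modulate_voxel_preferences(pref, gain, phase_shift):
    """Illustrative amplitude- and phase-based modulation of voxel
    preference features (assumed formulation, not the paper's code).

    pref        : (n_voxels, n_features) baseline preference weights
    gain        : (n_voxels,) multiplicative amplitude modulation per voxel
    phase_shift : scalar phase offset applied in the feature Fourier basis
    """
    # Amplitude modulation: per-voxel multiplicative gain on the tuning profile.
    modulated = pref * gain[:, None]

    # Phase modulation: shift each voxel's tuning profile along the
    # feature axis by rotating its Fourier components.
    spectrum = np.fft.rfft(modulated, axis=1)
    freqs = np.arange(spectrum.shape[1])
    spectrum *= np.exp(-1j * freqs * phase_shift)
    return np.fft.irfft(spectrum, n=pref.shape[1], axis=1)

# Toy usage: 100 voxels tuned over 32 object-feature dimensions.
rng = np.random.default_rng(0)
pref = rng.standard_normal((100, 32))
context_gain = 1.0 + 0.5 * rng.random(100)  # hypothetical context-specific gains
shifted = modulate_voxel_preferences(pref, context_gain, phase_shift=0.3)
```

In this sketch, gain rescales how strongly each voxel expresses its preferences while the phase shift rotates tuning along the feature dimension; together these operations reshape the population's representational geometry across contexts (e.g., as would be measured by representational similarity analysis), which is the kind of context-dependent adjustment the abstract attributes to VTC.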