Bayesian causal inference unifies perceptual and neuronal processing of center-surround motion in area MT
Abstract
Center-surround (CS) processing is a ubiquitous computational principle in the brain, operating across levels from early sensory encoding to abstract conceptual representation and shaping both behavior and neural responses. However, current models of these neural interactions, typically based on divisive normalization (DN) or descriptive frameworks, fail to account for the wide range of effects documented for CS motion alone. Here, we propose that both neural and perceptual phenomena related to center-surround motion emerge from a single normative principle: Bayesian causal inference over the latent structure of motion. We derived neural predictions from a causal inference model of perception that dynamically infers the most probable reference frame for moving stimuli. Specifically, we generated predictions for both the mean response and variability of single neurons across the full 4D space of center and surround motion (i.e., directions and speeds). These predictions exhibit complex interactions that are unexplainable by classic DN models but qualitatively match a wide range of neural data from the primate middle temporal area (MT). Our model shows how a heterogeneous population of MT neurons – variously encoding motion in retinal coordinates or relative to an inferred reference frame – can arise from this single inferential process. Our results resolve long-standing puzzles, such as when and why surround stimuli suppress versus facilitate neural responses, and why neural data can alternately support or challenge relative motion coding. By unifying disparate neural observations with perception, our work suggests new, theory-driven experiments to probe the circuit-level implementation of these computations.
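To make the core idea concrete, the sketch below illustrates a generic Bayesian causal inference computation of the kind the abstract invokes: inferring whether noisy center and surround motion cues share a common underlying cause. This is a minimal illustration in the style of standard causal-inference models (e.g., Körding et al., 2007), not the authors' model; the function name, Gaussian assumptions, and all parameter values are introduced here purely for illustration, and the paper's actual inference is over reference frames for motion rather than this simplified common-cause variable.

```python
# Illustrative sketch only (not the authors' model): posterior probability
# that center and surround direction cues arose from one shared motion cause.
import numpy as np

def common_cause_posterior(x_c, x_s, sigma_c=10.0, sigma_s=10.0,
                           sigma_prior=30.0, p_common=0.5):
    """Return P(common cause | x_c, x_s) for noisy direction cues (degrees).

    Assumptions (illustrative): Gaussian measurement noise on each cue and a
    zero-mean Gaussian prior over the latent motion source(s).
    """
    var_c, var_s, var_p = sigma_c**2, sigma_s**2, sigma_prior**2

    # Marginal likelihood under a shared source s ~ N(0, var_p):
    # x_c = s + noise_c, x_s = s + noise_s -> correlated 2D Gaussian.
    cov_common = np.array([[var_c + var_p, var_p],
                           [var_p, var_s + var_p]])
    # Marginal likelihood under independent sources: uncorrelated 2D Gaussian.
    cov_indep = np.array([[var_c + var_p, 0.0],
                          [0.0, var_s + var_p]])

    def gauss2(x, cov):
        det = np.linalg.det(cov)
        inv = np.linalg.inv(cov)
        return np.exp(-0.5 * x @ inv @ x) / (2 * np.pi * np.sqrt(det))

    x = np.array([x_c, x_s])
    like_common = gauss2(x, cov_common)
    like_indep = gauss2(x, cov_indep)
    return (like_common * p_common /
            (like_common * p_common + like_indep * (1 - p_common)))

# Similar cues favor a shared cause; discrepant cues favor independent causes.
print(common_cause_posterior(5.0, -3.0))   # relatively high posterior
print(common_cause_posterior(5.0, 60.0))   # near zero
```

In such a scheme, whether the surround is treated as part of the same motion structure as the center (and hence whether it suppresses or facilitates the response) falls out of the inferred posterior rather than a fixed normalization rule, which is the qualitative intuition behind the abstract's claims.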