Hemifield-Specific Motion Extrapolation Reveals Limits of Interhemispheric Integration


Abstract

The accurate perception of moving objects is a fundamental challenge for the visual system, which must compensate for neural processing delays. Motion extrapolation is a proposed mechanism whereby the brain predicts an object’s future position. We investigated how an object’s motion history shapes its perceived position using the flash-jump illusion, in which a brief color change in a moving bar is mislocalized further along the direction of motion. Across two experiments, we found that longer preceding motion sequences improved localization accuracy, consistent with motion adaptation. This effect occurred regardless of whether motion continued after the flash. Notably, mislocalization transiently reappeared as the object crossed the vertical midline, suggesting that motion adaptation and motion extrapolation operate independently within each hemifield. Manipulating the length of the sequence in each hemifield in Experiment 2 confirmed that this adaptation is spatially confined to each hemifield, with limited interhemispheric transfer. The results align with a Bayesian framework in which the brain integrates signals from both hemispheres, with midline crossings triggering a shift from adapted to unadapted neural populations. We identify motion extrapolation, supported by hemifield-specific adaptation in area MT and integration in area MST, as the mechanism behind these midline discontinuities. This work reframes smooth pursuit not just as a tracking behaviour, but as a functional solution to overcome the inherent limits of interhemispheric motion processing.