Dynamic context-based updating of object representations in visual cortex
Abstract
In everyday vision, objects in scenes are often poorly or only partially visible, for example because they are occluded or appear in the periphery. Previous studies have shown that the visual system can reconstruct missing object information based on the spatial context in static displays. Real-world vision is dynamic, however, causing the visual appearance of objects (e.g., their size and viewpoint) to change as we move. Importantly, these changes are highly predictable from the 3D structure of the surrounding scene, raising the possibility that the visual cortex dynamically updates object representations using this predictive contextual information. Here, we tested this hypothesis in two fMRI studies (N=65). Experiment 1 showed that visual representations of objects were sharpened when they rotated congruently (rather than incongruently) with the surrounding scene. Moreover, Experiment 2 showed that the updated orientation of the object (as dictated by the surrounding scene) could be decoded from visual cortex activity, even when the object itself was not visible. These findings indicate that predictive processes in the visual cortex follow the geometric structure of the environment, thus providing a mechanism that leverages predictions to aid object perception in dynamic real-world environments.
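As an aside for readers unfamiliar with multivariate fMRI decoding (the analysis class referenced in Experiment 2), the sketch below illustrates the general idea: training a cross-validated classifier to read out a stimulus property, here an implied object orientation, from voxel response patterns. This is a minimal illustration on simulated data, not the authors' actual pipeline; all variable names, trial counts, voxel counts, and effect sizes are hypothetical.

```python
# Illustrative sketch of cross-validated orientation decoding from fMRI voxel
# patterns (simulated data; hypothetical dimensions and signal strength).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

n_trials, n_voxels = 120, 200                      # hypothetical trial and voxel counts
orientation = rng.integers(0, 2, n_trials)         # two possible implied orientations (0 or 1)

# Simulated voxel responses: Gaussian noise plus a weak orientation-dependent signal.
signal = rng.normal(size=n_voxels)
patterns = rng.normal(size=(n_trials, n_voxels)) + 0.3 * np.outer(orientation, signal)

# Cross-validated decoding: above-chance accuracy indicates that the orientation
# label is linearly readable from the voxel response patterns.
decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
accuracy = cross_val_score(decoder, patterns, orientation, cv=cv).mean()
print(f"Cross-validated decoding accuracy: {accuracy:.2f} (chance = 0.50)")
```

In a real study, the patterns would be trial-wise response estimates from visual-cortex voxels, and decoding accuracy would be compared against chance across participants; the key logic, that a classifier generalizes to held-out trials, is the same.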