Spatiotemporal representations of contextual associations for real-world objects

Abstract

In the real world, objects always appear in context. Many objects are reliably associated with a certain scene context (e.g., pots appear in kitchens) and with other objects that appear in the same context (e.g., pans appear together with pots). Previous neuroimaging work suggests that such contextual associations shape the neural representation of isolated objects even in the absence of the scene context. Yet, three key questions remain unanswered: (1) How do representations of contextual associations relate to perceptual and categorical representations in visual cortex, (2) how do they emerge across time, and (3) how are they mechanistically implemented? To answer these questions, we recorded fMRI and EEG while participants (human, both sexes) viewed isolated objects drawn from two scene contexts. Multivariate pattern analysis of the neural data revealed that objects from the same context were coded more similarly than objects from different contexts in object-selective LOC and scene-selective PPA, even when systematically controlling for perceptual and categorical similarities. Such contextual relation representations emerged relatively late during visual processing (i.e., after perceptual and categorical representations), specifically in the anterior PPA, and likely arose through a mixture of object-to-object and object-to-scene associations. Together, our results demonstrate that contextual relation representations emerge for isolated objects, even without a task that encourages their formation, suggesting that objects automatically activate context frames that support visual cognition in real-world environments.
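
To make the analysis logic concrete, the following is a minimal sketch (not the authors' code) of how a "same context" model can be tested against a neural representational dissimilarity matrix (RDM) while controlling for perceptual and categorical models, using a partial rank correlation. All variable names, the number of objects, and the simulated data are hypothetical; only NumPy is used.

    import numpy as np

    def upper_tri(rdm):
        # Vectorize the upper triangle (excluding the diagonal) of an RDM.
        i, j = np.triu_indices(rdm.shape[0], k=1)
        return rdm[i, j]

    def rank(x):
        # Simple ranking, used to compute a partial Spearman correlation.
        return np.argsort(np.argsort(x)).astype(float)

    def partial_spearman(neural, model, nuisance):
        # Correlate neural and model RDM vectors after regressing the
        # nuisance model RDMs out of both (partial Spearman correlation).
        X = np.column_stack([np.ones(len(neural))] + [rank(n) for n in nuisance])
        def residual(y):
            r = rank(y)
            beta, *_ = np.linalg.lstsq(X, r, rcond=None)
            return r - X @ beta
        rn, rm = residual(neural), residual(model)
        return np.corrcoef(rn, rm)[0, 1]

    # Hypothetical example: 12 isolated objects, 6 per scene context.
    rng = np.random.default_rng(0)
    n_objects = 12
    context = np.repeat([0, 1], 6)  # scene context label per object
    context_rdm = (context[:, None] != context[None, :]).astype(float)
    perceptual_rdm = rng.random((n_objects, n_objects))
    perceptual_rdm = (perceptual_rdm + perceptual_rdm.T) / 2
    category_rdm = rng.random((n_objects, n_objects))
    category_rdm = (category_rdm + category_rdm.T) / 2
    # Simulated neural RDM that mixes context and perceptual structure.
    neural_rdm = 0.5 * context_rdm + 0.3 * perceptual_rdm \
                 + rng.normal(0, 0.1, (n_objects, n_objects))
    neural_rdm = (neural_rdm + neural_rdm.T) / 2

    r = partial_spearman(upper_tri(neural_rdm),
                         upper_tri(context_rdm),
                         [upper_tri(perceptual_rdm), upper_tri(category_rdm)])
    print(f"Context effect (partial Spearman r): {r:.3f}")

A positive partial correlation in this sketch corresponds to the reported finding: objects from the same context are represented more similarly than objects from different contexts, beyond what perceptual and categorical similarity alone can explain.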
