Spatial proximity and scene grammar: Shaping spatial representations for memory-guided actions in naturalistic environments

Abstract

Spatial representations of the objects in our environments are necessary for effective goal-directed actions. These representations can be formed with respect to the self (i.e., egocentrically) and/or with respect to surrounding objects (i.e., allocentrically). For the latter, one spatial factor of influence is the proximity of a landmark to the target: proximity shapes allocentric representations and informs future memory-guided actions, in both impoverished and rich scenes. More recent work suggests that cognitive factors (e.g., object semantics) can also influence allocentric coding. While these spatial and cognitive factors have each been shown to affect allocentric coding separately, here we investigated how they jointly affect allocentric coding in naturalistic environments. In a memory-guided virtual reality (VR) experiment, semantically related targets (i.e., local objects that are movable and manipulable, such as a cup) were presented on congruent or incongruent anchors (i.e., large, immovable objects that are predictive of local object locations). After an imperceptible anchor shift (or no shift), the target had to be placed in its remembered position. Placement performance indicated that anchors were indeed used to build up spatial target representations. Further analysis of placement behaviour revealed that proximity to the anchor shaped spatial coding, with similar effects across the semantic anchor manipulations (i.e., semantically congruent and incongruent anchor shifts). Taken together, we found that spatial target representations are differentially sensitive to these contextual factors (i.e., spatial vs. cognitive).
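For illustration only (this is not the authors' analysis code): the logic of the shift manipulation can be expressed as a simple computation. If placements drift along with a covertly shifted anchor, the target must have been coded allocentrically relative to that anchor. The Python sketch below, with a hypothetical function name and 2-D placement coordinates assumed, projects the placement error onto the anchor-shift vector to yield an "allocentric weight" between 0 (shift ignored) and 1 (placement fully followed the anchor).

import numpy as np

def allocentric_weight(placement, true_position, anchor_shift):
    # Hypothetical illustration: project the placement error onto the
    # anchor-shift vector. A value near 1 means the placed target moved
    # with the shifted anchor (anchor-relative, allocentric coding);
    # a value near 0 means the shift was ignored (e.g., egocentric coding).
    error = np.asarray(placement, dtype=float) - np.asarray(true_position, dtype=float)
    shift = np.asarray(anchor_shift, dtype=float)
    return float(np.dot(error, shift) / np.dot(shift, shift))

# Example: the anchor was covertly shifted 10 cm along x, and the
# participant placed the target 7 cm along x from its true position.
w = allocentric_weight(placement=[0.07, 0.0],
                       true_position=[0.0, 0.0],
                       anchor_shift=[0.10, 0.0])
print(w)  # 0.7 -> the placement followed about 70% of the anchor shift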
