Neurocognitive dynamics of translating information from a spatial map into action

Abstract

How do we translate information from a spatial map into action in our immediate surroundings? Despite the widespread use of orientation tools, from paper maps to GPS, this fundamental question remains unanswered in our understanding of human spatial navigation. To investigate it, we implemented a perspective-taking task in immersive virtual reality combined with mobile EEG, aiming to disentangle the neurocognitive processes involved. Thirty-eight young adults were presented with a virtual 2D map in which we manipulated both the perspective shift and the physical angle of rotation required to align with a target, as well as the congruency between these two variables. Behaviourally, angular error during pointing increased slightly and linearly with perspective shift. However, the relationship between rotation angle and accuracy revealed a non-linear pattern, with better performance around the antero-posterior body axis. Regarding congruency, angular error increased for incongruent trials, but only when the perspective-taking angle exceeded 90°. At the neural level, activity in the retrosplenial complex (RSC) revealed a sequential organization, with alpha-band modulation during perspective shift followed by beta-band activity reflecting preparation for the required physical rotation. In addition, incongruency between perspective-taking and physical rotation increased beta activity in the left temporo-parietal junction (lTPJ). Overall, these findings demonstrate the value of immersive virtual environments for investigating the neural correlates of real-world navigation and the complexity of perspective-taking mechanisms.