Embodied Navigation: whole-body movement drives path integration in large-scale free-walking virtual reality
Abstract
Human navigation relies on combining body cues (vestibular and proprioceptive signals) with visual cues such as optic flow. How these signals are weighted and integrated during the continuous tracking of walked distances and angles, known as path integration, remains poorly understood. Previous path integration studies have been limited to small spaces (< 150 m²), and the influence of complex environments on the weighting of body and visual cues is still unclear. Here, we conducted the largest-environment free-walking virtual reality navigation study to date in a 45 × 25 m facility (1,215 m²), using triangle completion tasks in naturalistic environments. We systematically manipulated sensory input across three conditions: natural active walking (full sensory integration), active joystick control (visual cues only), and blindfolded active walking (body cues only). Participants navigated both sparse fallow land without trees and more complex forest environments containing 400 trees. We embedded performance data in a Bayesian cue combination model to analyse the underlying combination mechanism. Our results provide evidence that most participants substantially favour body cues over visual cues in a non-Bayesian combination process, with considerable inter-individual variance in cue dominance strength and side biases. While transitioning from fallow land to forests reduced directional variance, the weighting of body and visual information remained constant. These findings advance our understanding of human spatial navigation by demonstrating that body-based cues dominate path integration even in visually rich, large-scale environments, challenging assumptions about optimal Bayesian cue integration in human navigation.
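For context, the benchmark against which such results are usually tested is the textbook Bayesian (maximum-likelihood) cue combination model, in which the combined estimate is a reliability-weighted average of the single-cue estimates. The following is a sketch of that standard formulation, not necessarily the exact model fitted in this preprint; here \(\hat{x}_b\) and \(\hat{x}_v\) denote the body-only and vision-only estimates, with variances \(\sigma_b^2\) and \(\sigma_v^2\) measured in the corresponding single-cue conditions:

\[
\hat{x} = w_b \hat{x}_b + w_v \hat{x}_v, \qquad
w_b = \frac{\sigma_v^2}{\sigma_b^2 + \sigma_v^2}, \qquad
w_v = 1 - w_b,
\]
\[
\sigma^2_{\text{combined}} = \frac{\sigma_b^2\, \sigma_v^2}{\sigma_b^2 + \sigma_v^2} \;\le\; \min\!\left(\sigma_b^2, \sigma_v^2\right).
\]

Under this model, body cues should dominate only to the extent that the body-only condition is correspondingly less variable than the vision-only condition; body-cue weights exceeding what the single-cue reliabilities predict are what would mark the observed combination as non-Bayesian.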