The effect of virtual reality modality, level of immersion, and locomotion on spatial learning and gaze measures
Abstract
Head-mounted display (HMD) virtual reality (VR) systems have been widely adopted in various fields, including spatial learning research. This study investigated the effects of VR modality, level of immersion, locomotion interface, and proprioception on spatial learning and physiological measures using eye-tracking (ET) in VR. We translated the classic T-maze task of Barnes et al. (1980) to humans for the first time, comparing three VR modalities: 3D HMD VR with physical walking, 3D HMD VR with controller-based movement, and 2D desktop VR. Results revealed that human participants employed a mixture of cue, place, and response strategies when navigating the virtual T-maze, mirroring rodent behavior. In both samples, no significant differences were found between the two HMD VR conditions in learning performance, nor consistent differences in strategy choices. However, 2D desktop navigation was associated with slower initial learning, although this discrepancy diminished in subsequent sessions. These results were corroborated by self-reports of spatial presence, immersion, and naturalness. Gaze measures showed that participants who physically walked devoted more visual attention to environmental cues than controller users did. Predictive models identifying spatial learning strategies from ET and behavioral measures achieved significant accuracy in some cases, particularly in the VR walking condition and the second session. Our findings advance the understanding of spatial learning strategies and of the effects of VR modality on cognition and gaze behavior. This work demonstrates the potential of integrated ET data and holds implications for early detection and personalized rehabilitation of neurodegenerative conditions related to spatial cognition.