Test Tube: Exploring sensorimotor efficiency of aiming movements in virtual environments
Abstract
Performing movements in simulated environments (e.g., virtual reality) can be advantageous in training scenarios and when experimentally separating visual and proprioceptive limb representations. However, compared to real physical environments, virtual environments differ in how the limb is represented (e.g., animated hand or cursor) and in the timing of the visual feedback (i.e., latency in rendering the tracked movement). Thus, the current study sought to explore the influence of both representation differences and additional visual feedback delays on sensorimotor control when moving in a virtual reality environment. Participants performed aiming movements in an environment in which they had vision of a cursor representing their index finger or vision of their actual upper limb. Further, vision was available either only prior to movement onset (i.e., feedforward) or throughout the movement (i.e., online guidance). In the feedforward condition, only representation differences were present between effectors. In the online guidance condition, however, there was an additional feedback delay when aiming with the cursor (at least ~17 ms). Performance was quantified as efficiency (bits/second), computed from movement amplitude, effective target width, and movement time. Movements were less efficient when aiming with vision of the cursor compared to vision of the upper limb in both the feedforward and online guidance conditions. Further, the magnitude of the cursor-limb efficiency difference was larger in the online guidance condition than in the feedforward condition. Overall, the results suggest that differences in representation and visual feedback delays can combine to produce less efficient movements when aiming in virtual compared to real environments.
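The efficiency measure described above (bits/second, derived from amplitude, effective target width, and movement time) corresponds to a Fitts' law throughput. A minimal sketch of one common formulation, the Shannon effective index of difficulty from ISO 9241-9, is shown below; the paper's exact variant and the example values are assumptions for illustration only.

```python
import math

def throughput(amplitude_mm: float, effective_width_mm: float,
               movement_time_s: float) -> float:
    """Throughput (bits/s), assuming the Shannon formulation
    IDe = log2(A / We + 1); the study may use a different variant."""
    # Effective index of difficulty, in bits
    ide = math.log2(amplitude_mm / effective_width_mm + 1)
    # Efficiency: bits of difficulty completed per second of movement
    return ide / movement_time_s

# Hypothetical example: 200 mm amplitude, 20 mm effective width, 0.5 s
tp = throughput(200.0, 20.0, 0.5)  # roughly 6.9 bits/s
```

Under this formulation, a larger effective target width (more endpoint scatter) or a longer movement time both lower throughput, which is why the measure captures the combined speed-accuracy cost of a degraded limb representation or a feedback delay.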