Towards the study of perceptual decision-making in peripheral vision with virtual reality

Abstract

Military personnel are often required to make perceptual decisions based on noisy and uncertain information. Over the last decade, experiments using simple stimuli in visual psychophysical paradigms have provided evidence that in the visual periphery, observers are often prone to false alarms and variable confidence judgments in detection tasks. To better understand whether these characteristics extend to more naturalistic paradigms, our research group has developed visual detection experiments in virtual reality, where on each trial, observers are required to first fixate on specific objects, then judge whether a target (i.e., a green soldier in uniform) is present, and rate their confidence in this decision. Inevitably, extending the study of peripheral vision to virtual reality involves trade-offs, with tension between the degree of experimental control that is employed and the ecological validity of the task. In this piece, we focus on the challenges that emerged and the lessons we have learned in pursuit of designs that strike a balance between these factors. In our first experiment, we required participants to ride in an automatically controlled vehicle, fixate on each red stop sign, and determine whether targets were present in the periphery while looking at the stop sign. Results indicated that observers (1) did not successfully fulfill the fixation requirement on all trials, and (2) making targets appear and disappear based on whether the fixation requirement was met produced flickering that served as an unintentional, salient cue. In our second experiment, we allowed participants to navigate the vehicle themselves, while again imposing the same fixation requirement and perceptual judgments at each intersection. Initial tests of this design found that variability in how participants navigated the vehicles sometimes caused them to stop at locations from which the stop sign and soldier could not be viewed simultaneously. Finally, in our third experiment, we manipulated the presence of the targets on each trial, and expanded our confidence questions to include both target-present and target-absent trials. Results revealed that confidence was similar in both target-present and target-absent conditions. Based on these preliminary findings, we outline design-related recommendations for studies of the periphery, and explain the trade-offs that extend across several military-relevant tasks, including both target detection and visual search. Finally, we also share our code to facilitate eye tracking in the HTC Vive Pro, as well as our target detection tasks.
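One practical lesson from the first experiment is that tying target visibility directly to a frame-by-frame fixation check causes flicker whenever the eye tracker momentarily loses fixation, and that flicker itself becomes a salient cue. A common remedy is to add hysteresis: the target only appears after several consecutive fixation frames and only disappears after several consecutive non-fixation frames. The sketch below is illustrative only (it is not the authors' shared code, and the function name and frame-count parameters are assumptions), showing the idea on a stream of per-frame fixation flags:

```python
# Illustrative sketch (not the authors' code): gaze-contingent target
# visibility with hysteresis, to suppress the single-frame flicker that
# arises when visibility toggles with every fixation-check result.

def gaze_contingent_visibility(fixating_frames, on_after=3, off_after=3):
    """Return per-frame target visibility for a stream of fixation flags.

    The target appears only after `on_after` consecutive fixation frames
    and disappears only after `off_after` consecutive non-fixation
    frames, so brief tracker dropouts do not make the target flicker.
    """
    visible = False
    on_count = off_count = 0
    out = []
    for fixating in fixating_frames:
        if fixating:
            on_count += 1
            off_count = 0
        else:
            off_count += 1
            on_count = 0
        if not visible and on_count >= on_after:
            visible = True
        elif visible and off_count >= off_after:
            visible = False
        out.append(visible)
    return out
```

With `off_after` greater than one, a single dropped fixation frame in the middle of a steady fixation leaves the target continuously visible, rather than blinking it off and on.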
