Predicting Driving State, Intent to Merge and Working Memory Load from Eye-Tracking Data Collected in a Driving Simulator Study


Abstract

Eye tracking has gained renewed attention with camera-based systems and machine-learning methods that enable non-invasive inference of cognitive state. This thesis explores whether eye-tracking features can predict (i) driving state, (ii) intent to merge, and (iii) working-memory load in a virtual driving environment. Participants drove through unsignaled intersections with oncoming traffic while performing an auditory n-back task that elicited low or high working-memory load.

We demonstrate robust cross-subject classification of driving state and significant classifier performance for most participants for both intent to merge and working-memory load. Methodologically, we introduce a bivariate Gaussian density feature of horizontal and vertical gaze that improves driving-state and intent-to-merge classification. We also show that pupillometric features—especially differences in pupil-size variability between fixations and saccades—aid working-memory classification in this scenario, and we confirm the utility of median/mean pupil size reported in prior experimental work.

Overall, our findings support growing evidence that eye-tracking signals can inform an operator's cognitive state and may translate to real-world assistance systems. However, because a virtual environment cannot capture all real-world confounds, these results should be viewed as evidence of potential rather than proof of real-world utility.
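To illustrate the kind of feature the abstract describes, the sketch below fits a bivariate Gaussian to horizontal and vertical gaze samples and summarizes it as scalar features. This is a minimal illustrative example, not the thesis's actual implementation: the function name, the choice of summary statistics, and the use of mean log-density are all assumptions.

```python
import numpy as np

def bivariate_gaussian_gaze_features(gaze_xy):
    """Fit a bivariate Gaussian to (x, y) gaze samples and return
    summary statistics usable as classifier features.

    gaze_xy: array of shape (n_samples, 2) with horizontal and
    vertical gaze coordinates.
    """
    mu = gaze_xy.mean(axis=0)            # mean gaze position
    cov = np.cov(gaze_xy, rowvar=False)  # 2x2 gaze-dispersion covariance
    centered = gaze_xy - mu
    inv_cov = np.linalg.inv(cov)
    # Squared Mahalanobis distance of each sample from the fitted mean.
    mahal = np.einsum("ij,jk,ik->i", centered, inv_cov, centered)
    # Log-density of each sample under the fitted 2D Gaussian.
    log_density = -0.5 * (mahal + np.log(np.linalg.det(cov))
                          + 2.0 * np.log(2.0 * np.pi))
    return {
        "mean_x": mu[0], "mean_y": mu[1],
        "var_x": cov[0, 0], "var_y": cov[1, 1],
        "cov_xy": cov[0, 1],
        "mean_log_density": log_density.mean(),
    }

# Example: synthetic gaze with more vertical than horizontal spread.
rng = np.random.default_rng(0)
gaze = rng.normal(loc=[0.0, 0.0], scale=[1.0, 2.0], size=(500, 2))
features = bivariate_gaussian_gaze_features(gaze)
```

Such per-window summaries could then be fed to a standard classifier alongside pupillometric features; which statistics best separate driving states is an empirical question addressed in the thesis itself.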
