Multimodal Classification of Cognitive Workload Using Eye-Tracking, ECG, and Head Motion Data in Simulated Military Missions

Abstract

Accurately assessing cognitive workload is critical in military operations, where decisions must be made under pressure in complex and dynamic environments. This study presents multimodal machine learning approaches for classifying workload into three levels: low, moderate, and high. Synchronized electrocardiogram (ECG), eye-tracking, and head movement signals from inertial measurement units were collected across 26 simulated missions involving autonomous technologies. High-workload segments were annotated by experts based on task demands and performance. Physiological and behavioral features, including heart rate, heart rate variability, pupil diameter, fixation count, and blink rate, were extracted and normalized per participant to account for individual variability. Classification models were evaluated using subject-independent five-fold cross-validation to ensure generalization. Among the tested models, XGBoost achieved the highest performance, with an accuracy of 0.86 and a macro-averaged F1 score of 0.78, outperforming Random Forest (accuracy: 0.82, F1: 0.73) and Decision Tree (accuracy: 0.74, F1: 0.65). Feature importance analysis revealed pupil size and fixation dispersion as key predictors of cognitive workload. These findings demonstrate the feasibility of real-time, noninvasive cognitive workload monitoring using multimodal physiological signals and support the development of adaptive human-machine systems that dynamically respond to operator cognitive states in high-demand environments.
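The sketch below illustrates the evaluation protocol the abstract describes: per-participant normalization, subject-independent five-fold cross-validation (here via scikit-learn's GroupKFold, which keeps each participant's segments confined to a single fold), and a multi-class XGBoost classifier. It is a minimal sketch on synthetic data; the feature names, data ranges, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the pipeline described in the abstract.
# Assumptions (not from the paper): feature names, synthetic data,
# and XGBoost hyperparameters.
import numpy as np
import pandas as pd
from sklearn.model_selection import GroupKFold
from sklearn.metrics import accuracy_score, f1_score
from xgboost import XGBClassifier

# Hypothetical feature table: one row per annotated mission segment.
rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "participant": rng.integers(0, 26, n),           # 26 simulated missions
    "heart_rate": rng.normal(75, 10, n),
    "hrv_rmssd": rng.normal(40, 12, n),
    "pupil_diameter": rng.normal(3.5, 0.6, n),
    "fixation_count": rng.poisson(20, n).astype(float),
    "blink_rate": rng.normal(15, 4, n),
    "workload": rng.integers(0, 3, n),                # 0=low, 1=moderate, 2=high
})

features = ["heart_rate", "hrv_rmssd", "pupil_diameter",
            "fixation_count", "blink_rate"]

# Per-participant z-score normalization to account for individual variability.
df[features] = df.groupby("participant")[features].transform(
    lambda x: (x - x.mean()) / x.std(ddof=0)
)

X = df[features].values
y = df["workload"].values
groups = df["participant"].values

# Subject-independent five-fold CV: no participant appears in both the
# training and test folds, so scores reflect generalization to new subjects.
accs, f1s = [], []
for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups):
    model = XGBClassifier(n_estimators=200, max_depth=4,
                          eval_metric="mlogloss")
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    accs.append(accuracy_score(y[test_idx], pred))
    f1s.append(f1_score(y[test_idx], pred, average="macro"))

print(f"accuracy: {np.mean(accs):.2f}, macro-F1: {np.mean(f1s):.2f}")

# Feature importance from a model fit on all data, analogous to the
# analysis that highlighted pupil-related features as key predictors.
final = XGBClassifier(eval_metric="mlogloss").fit(X, y)
for name, imp in sorted(zip(features, final.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```

With real data, the same group-aware split is what distinguishes subject-independent evaluation from the more optimistic segment-level shuffling, which leaks participant-specific signal characteristics into the test folds.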
