Automated Detection of Quiet Eye Durations in Archery Using Electrooculography and Comparative Deep Learning Models
Abstract
This study presents a deep learning-based approach for the automated detection of Quiet Eye (QE) durations from electrooculography (EOG) signals in archery. QE—the final fixation or tracking of the gaze before executing a motor action—is a critical factor in precision sports. Traditional detection methods, which rely on expert evaluations, are inherently subjective, time-consuming, and inconsistent. To overcome these limitations, EOG data were collected from 10 licensed archers during controlled shooting sessions and preprocessed using a wavelet transform and a Butterworth bandpass filter for noise reduction. We implemented and compared five deep learning models—CNN + LSTM, CNN + GRU, Transformer, UNet, and 1D CNN—for QE detection. The CNN + LSTM model achieved the highest accuracy (95%), followed closely by CNN + GRU (93%), demonstrating superior performance in capturing both spatial and temporal dependencies in the EOG signals. Although Transformer-based and UNet models performed competitively, they exhibited lower precision in distinguishing QE periods. These results indicate that deep learning provides an effective and scalable solution for objective QE analysis, substantially reducing the dependence on expert annotations. This automated approach can enhance sports training by offering real-time, data-driven feedback to athletes and coaches. Furthermore, the methodology holds promise for broader applications in cognitive and motor skill assessments across various domains. Future work will focus on expanding the dataset, enabling real-time deployment, and evaluating model generalizability across different skill levels and sports disciplines.
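For illustration, the preprocessing stage summarized above can be sketched as follows. This is a minimal sketch only: the sampling rate, passband (0.1–30 Hz), Butterworth order, wavelet family (db4), and decomposition level are assumptions for the sake of example, since the abstract does not specify these parameters.

```python
import numpy as np
import pywt
from scipy.signal import butter, filtfilt

def bandpass_filter(eog, fs, low=0.1, high=30.0, order=4):
    """Zero-phase Butterworth bandpass filter applied to a 1-D EOG trace.

    The passband and filter order are illustrative assumptions.
    """
    nyq = 0.5 * fs
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, eog)

def wavelet_denoise(eog, wavelet="db4", level=4):
    """Soft-threshold wavelet denoising (universal threshold).

    Wavelet family and decomposition level are illustrative assumptions.
    """
    coeffs = pywt.wavedec(eog, wavelet, level=level)
    # Noise level estimated from the finest detail coefficients
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(eog)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(eog)]
```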
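Similarly, a minimal sketch of a CNN + LSTM classifier of the kind compared in this study is shown below. The channel count (horizontal and vertical EOG), layer sizes, and windowed binary QE / non-QE output are assumptions chosen for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """1D convolutional front end followed by an LSTM over the extracted
    feature sequence, producing a QE probability per input window."""

    def __init__(self, in_channels=2, conv_channels=32, lstm_hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, conv_channels, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(conv_channels, conv_channels, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(conv_channels, lstm_hidden, batch_first=True)
        self.head = nn.Linear(lstm_hidden, 1)

    def forward(self, x):              # x: (batch, channels, samples)
        feats = self.conv(x)           # (batch, conv_channels, samples / 4)
        feats = feats.transpose(1, 2)  # (batch, time, features) for the LSTM
        out, _ = self.lstm(feats)
        # Probability that the window belongs to a QE period
        return torch.sigmoid(self.head(out[:, -1]))
```

The convolutional layers capture local waveform shape, while the LSTM models the longer-range temporal dependencies that the abstract credits for the CNN + LSTM model's top accuracy.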