Eye-movement benchmark data for smooth-pursuit classification

Abstract

Analysis of eye-tracking data often requires accurate classification of eye-movement events. Human experts and classification algorithms alike often confuse fixations (gaze held on stationary targets) and smooth pursuits (gaze following moving targets) because their feature characteristics overlap. To foster the development of better classification algorithms, we created a benchmark data set that does not rely on human annotation as the gold standard. It consists of almost four hours of eye-movement recordings. Ten participants fixated targets designed to elicit saccades, fixations, and smooth pursuits. Ground truth was established by designing stimuli that prevent fixations and smooth pursuits from co-occurring, and by separating both from saccades on the basis of velocity. We make the raw data available and provide a companion Python package that offers a convenient way to preprocess the recordings and assign ground-truth labels. We encourage researchers to use these resources for feature engineering, and to train, validate, and benchmark their algorithms.
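The velocity-based separation of saccades from the slower events can be sketched as follows. This is a minimal illustration, not the benchmark's actual labeling code: the function name, the 500 Hz sample rate, and the 100 deg/s threshold are illustrative assumptions.

```python
import numpy as np

def label_samples(x, y, sample_rate_hz=500.0,
                  saccade_threshold_deg_s=100.0, target_moving=False):
    """Label each gaze sample by velocity: fast samples are saccades;
    slow samples are fixations (stationary target) or smooth pursuits
    (moving target), disambiguated by the known stimulus condition.

    x, y: gaze position in degrees of visual angle.
    NOTE: the sample rate and threshold are illustrative assumptions,
    not values taken from the benchmark data set.
    """
    dt = 1.0 / sample_rate_hz
    # Per-sample velocity magnitude (deg/s) via central differences.
    vx = np.gradient(np.asarray(x, dtype=float), dt)
    vy = np.gradient(np.asarray(y, dtype=float), dt)
    speed = np.hypot(vx, vy)
    slow_label = "pursuit" if target_moving else "fixation"
    return np.where(speed >= saccade_threshold_deg_s, "saccade", slow_label)
```

Because the stimuli ensure fixations and pursuits never co-occur, the slow-event label follows directly from whether the target was moving, so only the saccade/non-saccade decision needs a velocity criterion.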