Cross-Domain Evaluation and Fine-Tuned Adaptation of iCatcher+ for Korean Infant Gaze Data

Abstract

This study evaluates the cross-domain generalisability of three iCatcher+ gaze classification models on a dataset of Korean infants. Applying the pretrained models to the Korean dataset produced a substantial decline in accuracy across all three models. Performance was systematically modulated by developmental mismatch: accuracy fell as the age distribution of the target dataset diverged from that of the training data. In contrast, experiment-level factors such as stimulus type and temporal progression had limited effects. Fine-tuning the Lookit model on Korean data improved target-domain performance, but at the cost of reduced accuracy on the original source dataset, consistent with catastrophic forgetting. These findings show that deep learning-based gaze classifiers remain sensitive to domain shift: although fine-tuning can partially mitigate performance loss, robust generalisation requires closer alignment between training and deployment contexts.