Catching up with iCatcher: Comparing analyses of infant eye-tracking based on trained human coders and iCatcher+ automated gaze coding software

Abstract

Eye-tracking measures, which provide crucial insight into the processes underlying human language, cognition, perception, and social behavior, are particularly important in research with preverbal infants. Until recently, analyzing infant eye gaze required either expensive corneal-reflection eye-tracking technology or labor-intensive manual annotation (coding). Fortunately, iCatcher+, a recently developed AI-based automated gaze annotation tool, promises to reduce these costs. For iCatcher+ to be adopted as a mainstream annotation tool, it is essential to determine how its annotations compare to those produced by trained human coders. Here, we provide such a comparison, using 288 videos from a word-learning experiment with 12-month-olds. We evaluate both the agreement between the two annotation systems and the effects identified using each. We find that (1) agreement between human-coded and iCatcher+-annotated video data is excellent (88%) and comparable to intercoder agreement among human coders (90%), and (2) both annotation systems yield the same patterns of effects. These results provide strong assurance that iCatcher+ is a viable alternative to manual annotation of infant gaze, one that holds promise for increasing efficiency, reducing costs, and broadening the empirical base of infant eye-tracking research.
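
The headline figures are frame-level percent agreement between two annotation streams. As an illustration only (this is not the article's analysis code; the label set and the percent_agreement function name are assumptions), a minimal Python sketch of how such agreement might be computed:

    from typing import Sequence

    def percent_agreement(coder_a: Sequence[str], coder_b: Sequence[str]) -> float:
        """Proportion of frames on which two annotation streams assign the same label."""
        if len(coder_a) != len(coder_b):
            raise ValueError("Annotation streams must cover the same frames.")
        matches = sum(a == b for a, b in zip(coder_a, coder_b))
        return matches / len(coder_a)

    # Hypothetical per-frame gaze labels (e.g., one stream from a human coder,
    # one from iCatcher+): 8 frames, 7 matching labels -> 87.5% agreement.
    human = ["left", "left", "right", "right", "away", "left", "left", "right"]
    icatcher = ["left", "left", "right", "away", "away", "left", "left", "right"]
    print(f"Agreement: {percent_agreement(human, icatcher):.1%}")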
