Machine learning-enhanced behavioural approach to detecting reactions to sound in infants and toddlers: proof-of-concept study

Abstract

Objective

To show that a basic unsupervised machine learning (ML) algorithm can recover the direction of infants' and toddlers' reactions to sound from non-identifiable, video-recorded facial features.

Design

Infants and toddlers were presented with warble tones or single-syllable utterances from 45 degrees to the left or right. A camera recorded their reactions, from which features such as head turns and eye gaze were extracted with OpenFace. Three clusters were formed by Expectation Maximization on 80% of the toddler data. The remaining 20% and all infant data were used to verify whether the clusters represented sound presentations to the left, to the right, and in both directions.
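
As a rough illustration only (not the authors' exact pipeline), the clustering step could look like the sketch below: per-trial head-yaw and gaze features are fitted with a three-component Gaussian mixture, whose parameters scikit-learn estimates via Expectation Maximization. The file name, the trial_id column, and the mean aggregation are assumptions; pose_Ry (head yaw) and gaze_angle_x (horizontal gaze angle) are typical OpenFace output columns.

```python
# Illustrative sketch: a 3-component Gaussian mixture fitted by
# Expectation Maximization on per-trial OpenFace features.
# File name, trial_id column, and mean aggregation are assumptions.
import pandas as pd
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split

# OpenFace writes one row per video frame; collapse to one feature
# vector per trial (here: mean head yaw and horizontal gaze angle).
frames = pd.read_csv("openface_output.csv")
trials = frames.groupby("trial_id")[["pose_Ry", "gaze_angle_x"]].mean()

# Fit on 80% of the trials, hold out 20% for verification.
train, test = train_test_split(trials, train_size=0.8, random_state=0)
gmm = GaussianMixture(n_components=3, random_state=0).fit(train)

labels = gmm.predict(test)  # cluster assignment per held-out trial
```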

Study Sample

28 infants (2-5 months) and 30 toddlers (2-4 years), all born preterm (<32 weeks gestational age), were each presented with ten sounds.

Results

The largest cluster comprised 90% of the trials with sound presentations in both directions, indicating “no decision.” The remaining two clusters could be interpreted as representing reactions to the left and to the right, respectively, with average sensitivities of 96% for the toddlers and 68% for the infants.
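
For reference, sensitivity here is the fraction of trials presented from a given direction that the correspondingly mapped cluster recovers. A hypothetical scoring helper (all names are illustrative placeholders, not from the paper) might look like:

```python
import numpy as np

def direction_sensitivity(true_dirs, mapped_labels, direction):
    """Fraction of trials presented from `direction` whose cluster
    (after mapping clusters to directions) matches that direction."""
    true_dirs = np.asarray(true_dirs)
    mapped_labels = np.asarray(mapped_labels)
    mask = true_dirs == direction
    return float(np.mean(mapped_labels[mask] == direction))
```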

Conclusions

A simple machine learning algorithm was shown to make correct decisions on the direction of sound presentation using non-identifiable facial behavioural data.
