AI-enhanced behavioral approach to measuring hearing in infants and toddlers: Proof-of-Concept Study
Abstract
Objective
To show that a basic unsupervised machine-learning algorithm can indicate whether a child reacted to a sound, using non-identifiable facial features recorded with a camera.
Design
Infants and toddlers were presented with warble tones or single-syllable utterances at 45 degrees to the left or to the right. A camera recorded their reactions, from which features such as head turns and eye gaze were extracted with OpenFace. Three clusters were formed by applying Expectation Maximization to 80% of the toddler data. The remaining 20% and the infant data were used to verify whether the clusters represent groups for sound presentations to the left, to the right, and in both directions.
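A minimal sketch of the described clustering step, assuming per-trial OpenFace features such as head yaw and gaze angles are available in tabular form (the column names, file names, and random seed below are illustrative assumptions, not the study's exact setup); scikit-learn's GaussianMixture implements Expectation Maximization for a Gaussian mixture:

```python
# Sketch: EM clustering (3 clusters) on OpenFace-derived reaction features,
# trained on 80% of the toddler trials and applied to the held-out 20%
# and to the infant trials. CSV layout and feature names are hypothetical.
import pandas as pd
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split

FEATURES = ["head_yaw", "gaze_angle_x", "gaze_angle_y"]  # assumed columns

toddlers = pd.read_csv("toddler_trials.csv")  # one row per sound presentation
infants = pd.read_csv("infant_trials.csv")

train, test = train_test_split(toddlers, test_size=0.2, random_state=0)

# GaussianMixture fits a 3-component mixture via Expectation Maximization.
gmm = GaussianMixture(n_components=3, random_state=0)
gmm.fit(train[FEATURES])

# Assign held-out toddler trials and all infant trials to the clusters.
test_clusters = gmm.predict(test[FEATURES])
infant_clusters = gmm.predict(infants[FEATURES])
```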
Study Sample
28 infants (2-5 months) and 30 toddlers (2-4 years) were each presented with ten sounds.
Results
The largest cluster comprised 90% of the trials, with sound presentations in both directions, and was interpreted as “no decision”. The remaining two clusters could be interpreted as representing reactions to the left and to the right, respectively, yielding average sensitivities of 96% for the toddlers and 68% for the infants.
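Continuing the sketch above, one way the per-direction sensitivities could be computed once each cluster is mapped to an interpreted side (the mapping, the `side` column, and the averaging are assumptions for illustration, not the study's reported procedure):

```python
import numpy as np

# Hypothetical mapping from cluster index to interpreted direction.
cluster_to_side = {0: "none", 1: "left", 2: "right"}

def sensitivity(pred_clusters, true_sides, side):
    """Fraction of presentations from `side` assigned to the matching cluster."""
    pred_sides = np.array([cluster_to_side[c] for c in pred_clusters])
    mask = np.asarray(true_sides) == side
    return (pred_sides[mask] == side).mean()

# `test_clusters` and the assumed ground-truth column `test["side"]`
# come from the sketch in the Design section.
left_sens = sensitivity(test_clusters, test["side"], "left")
right_sens = sensitivity(test_clusters, test["side"], "right")
avg_sens = (left_sens + right_sens) / 2
```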
Conclusions
A simple machine-learning algorithm was shown to make correct decisions on the direction of sound presentation using anonymous facial data.