Examining spoken language input to infants with cochlear implants

Abstract

Purpose: To compare spoken language input to young deaf/hard of hearing (DHH) children with cochlear implants and matched chronological-age and hearing-age hearing controls.

Method: We used long-form audio recordings (M = 14.3 hr) from 48 children aged 6-32 months (16 per group). We manually transcribed 40 min of each recording and ran automated LENA algorithms over the full daylong recording. We computed 10 automated and manually annotated metrics of input quantity, complexity, and conceptual content. We also computed children's speech outcomes and linked the input metrics to those outcomes.

Results: There were no significant cross-group differences in input quantity, nor in any input metric between the DHH group and their chronological-age matches. The DHH group heard significantly shorter sentences and more highly auditory words than hearing-age matches. Although DHH children produced more (and more mature) vocalizations than hearing-age matches, they produced fewer mature vocalizations than age-matched peers, and their vocalizations increased less robustly with age. In regression models comparing the DHH group with hearing-age matches, only hearing status explained variance in child vocalizations. For the DHH group and chronological-age matches, age, hearing status (hearing > DHH), input quantity, and shorter mean length of utterance (MLU) in the input together predicted more than 50% of the variance in children's vocal maturity.

Conclusions: DHH children and hearing children differed little in their language input, and differences from hearing-age controls are likely explained by those controls' younger age. Nevertheless, we find lower rates of child vocalization in the DHH group and a weaker increase with age. This study extends prior findings through its in-depth examination of a young cohort using both automated and manual measures of speech.