Pedagogy in the speech-gesture couplings of caregivers: evidence from corpus-based analysis


Abstract

In face-to-face communication, representational gestures, which imagistically evoke properties of referents, have been shown to support both language comprehension and development. In adult-adult interaction, representational gestures are systematically produced before the words they are semantically related to and can therefore be used by addressees to predict upcoming words. However, gestures cannot support prediction of words the addressee does not know, so producing them before the semantically associated words may be less useful for young children. Nothing is yet known about whether the same temporal speech-gesture relationship holds for caregivers talking to their children. We annotated representational gestures in a large corpus (ECOLANG) of semi-naturalistic conversations between caregivers and their 3-4-year-old children (n = 929 gestures from n = 38 caregivers). We found a more variable relationship between the timing of gesture and speech. Specifically, for words used more frequently in language to young children, or mentioned recently, gesture strokes (the meaningful part of a gesture) tended to be produced before the word’s onset; for rarely used words and words not mentioned recently, the stroke tended to be produced at the same time as or after the onset of the word. Thus, caregivers are sensitive to how familiar words are to their child and dynamically adjust the timing of their gestures, which likely serve different functions: supporting online prediction of words familiar to the child and providing semantic enrichment for words that are less familiar.
