Modeling Children's Grammar Learning via Caregiver Feedback in Natural Conversations

Abstract

Many debates in the language acquisition literature have revolved around the role of negative evidence in the acquisition of grammar. This question has not been settled with traditional research methods, because answering it requires capturing children’s natural social interaction while controlling for the specific role of error-contingent feedback, independent of other types of input. Here, we leveraged computational modeling to test whether caregivers’ feedback induces learning gains in grammar above and beyond learning from input alone. More specifically, we compared language models trained on large corpora of child-directed language to the same models additionally fine-tuned through reinforcement learning, using a reward model trained to provide caregiver-like feedback. Focusing on clarification requests, we found that fine-tuned models produced more grammatical utterances than baseline models; however, performance on challenging benchmarks of grammatical knowledge did not improve. We showed that performance on these benchmarks could, in principle, be improved through the integration of other types of feedback. The broad impact of the current work is to introduce a methodological framework that enables scientists to test many types of feedback, including signals beyond the verbal modality, leading to a more comprehensive evaluation of caregiver feedback in language development.
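To make the pipeline described above concrete, the following is a minimal, illustrative sketch (in Python, using PyTorch and Hugging Face Transformers) of reinforcement-learning fine-tuning of a language model against a feedback-based reward: a causal language model is updated with a REINFORCE-style policy gradient, with a stand-in reward function playing the role of the learned caregiver-feedback reward model. The model name ("gpt2"), the prompt, and the toy reward criterion are hypothetical placeholders, not the authors' actual setup.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # placeholder; the paper trains on child-directed corpora
policy = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-5)

def caregiver_reward(utterance: str) -> float:
    # Hypothetical stand-in for the learned reward model: +1 for an utterance
    # judged grammatical, -1 for one that would elicit a clarification request.
    return 1.0 if utterance.strip().endswith(".") else -1.0  # toy criterion

prompt = tokenizer("The child said:", return_tensors="pt")
prompt_len = prompt["input_ids"].shape[1]

for step in range(100):
    # 1. Sample an utterance from the current policy.
    with torch.no_grad():
        out = policy.generate(**prompt, do_sample=True, max_new_tokens=20,
                              pad_token_id=tokenizer.eos_token_id)
    utterance = tokenizer.decode(out[0, prompt_len:], skip_special_tokens=True)

    # 2. Score it with the (stand-in) caregiver-feedback reward model.
    reward = caregiver_reward(utterance)

    # 3. REINFORCE update: scale the utterance's log-probability by the reward.
    logits = policy(out).logits[:, :-1, :]
    log_probs = torch.log_softmax(logits, dim=-1)
    token_log_probs = log_probs.gather(-1, out[:, 1:].unsqueeze(-1)).squeeze(-1)
    seq_log_prob = token_log_probs[:, prompt_len - 1:].sum()
    loss = -reward * seq_log_prob
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()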
