Scaffolding Student-AI Dialogue: A Framework for Safe Educational Interactions


Abstract

Adolescents are the largest and fastest-growing age group adopting Large Language Models (LLMs), yet these models were not developed with their educational, emotional, and developmental needs in mind. Safe, adapted, and appropriate LLMs are essential for AI literacy, as well as for safe and beneficial use in educational settings and beyond. In this paper, we present the Steered Contextual AI Framework for Orchestrating Learning Dialogue (SCAFFOLD), a layered reliability framework that surrounds text/speech generation with external verification, targeted repair, and safe fallback, and that can work with any LLM while preserving data privacy. Grounded in the pedagogical and developmental science of learning, in particular the fact that learning requires effort, SCAFFOLD is designed to preserve productive effort, supporting active human-AI collaboration. SCAFFOLD starts by designing the interaction for a given context and goal, and then defines the guidance and checks to be implemented, distinguishing deterministic checks from probabilistic ones. We tested SCAFFOLD in a classroom deployment with 12-16-year-old students using an LLM-powered social robot in a multi-user interaction in which students co-created a mnemonic on a previously covered topic. Students first interacted with a prompt-only LLM and then with SCAFFOLD during a learning intervention. Our results show more activity, engagement, and on-topic talk with SCAFFOLD than with the prompt-only LLM. Additionally, students' co-creation efforts with SCAFFOLD predicted their post-test knowledge scores, demonstrating the framework's potential to support engagement, social interaction, and learning.
