Explain or Diminish: Rethinking Generative AI Through the Lens of Student Motivation
Abstract
The increasing integration of generative artificial intelligence (AI) tools in educational settings has raised critical concerns about their impact on student motivation and engagement. While these tools can offer immediate answers and streamlined learning experiences, their opaque decision-making processes may lead to surface-level understanding, reduced cognitive effort, and diminished intrinsic motivation. Students who rely excessively on generative AI systems may disengage from effortful learning, particularly when they cannot understand or challenge the reasoning behind AI-generated outputs. In this position paper, we argue that the principles of explainable AI (XAI) offer a promising pathway to counter these motivational risks. By making AI decisions transparent and understandable, XAI can re-engage students in the learning process, foster a sense of epistemic agency, and promote deeper cognitive involvement. In doing so, XAI aligns with established motivational theories, such as self-determination theory and expectancy-value theory, which emphasize the importance of autonomy, competence, and relevance in sustaining student motivation. These insights provide actionable guidance for educators and AI designers seeking to foster more meaningful engagement in technology-enhanced learning environments.