Teaching with Generative AI: From Policing Use to Pedagogical Partnership


Abstract

Generative artificial intelligence (AI) is now embedded in students’ everyday academic practice, yet institutional responses continue to emphasise suspicion, detection, and control. This paper argues that large language models (LLMs) can be integrated into teaching not as threats to integrity but as catalysts for epistemic agency. I outline a teaching intervention in which students conduct both human and AI-assisted (ChatGPT) reflexive thematic analyses of qualitative data, followed by a critical evaluation of the model’s interpretive claims. By working with and against ChatGPT’s limitations, students clarify what human interpretation requires: theoretical grounding, contextual sensitivity, and reflexive judgement. Through the comparison, they develop critical AI literacy, articulate the nature of reflexivity, and distinguish pattern recognition from meaning-making. This “pedagogical mirror effect” demonstrates how LLMs can be used to teach thinking rather than replace it, and supports a shift from compliance-oriented governance toward pedagogies of human–AI co-agency.
