Redistributing Epistemic Labor: Prior Knowledge Shapes How Effectively Students Use Large Language Models

Abstract

Medical education prepares students to reason under uncertainty, critically evaluate evidence, and make accountable clinical judgements: epistemic demands that define professional competence. Large language models (LLMs) are now entering this field, shifting the responsibility for seeking and synthesizing information from learners to tools. We argue that whether this redistribution of epistemic labor supports or undermines the development of clinical reasoning depends on what learners already know. To test this claim, we conducted an experiment in which medical and social science students completed a critical reasoning task on an unsettled medical issue, the safety of nanoparticle-based sunscreen, using either an LLM (ChatGPT-4) or a traditional search engine (Google). Across both disciplines, LLM users reported significantly lower cognitive load, confirming that AI assistance reduces perceived effort regardless of expertise. The effect on justification quality, however, was moderated by domain knowledge: medical students produced stronger justifications when using the LLM, whereas social science students produced stronger justifications when using the search engine. These findings suggest that LLMs do not uniformly support or hinder epistemic performance; their effect depends on the prior knowledge that learners bring to the task. For medical education, the implications are direct: integrating LLMs into curricula requires not only technical access but also the deliberate preparation of students to exercise epistemic responsibility, evaluating, challenging, and taking ownership of AI-generated information rather than deferring to it.
