A Pedagogical Framework for Integrating Large Language Models into Biomedical Education
Abstract
Large Language Models (LLMs) with 'reasoning' capabilities perform at the level of PhD scientists, yet most biomedical education programs lack a coherent response. We introduce a framework for integrating reasoning LLMs into biomedical education comprising Curricular realignment, Cognitive assurance, and Elevated expectations (the 'RAE Framework', indexing the operative concept in each pillar: Realignment, Assurance, Expectations). Curricular realignment establishes working familiarity with LLMs through targeted training that emphasizes verification, disclosure, and risk mitigation. Cognitive assurance verifies that cognitive engagement occurred, through oral examination for summative assessments and proof-of-work documentation for formative assignments, addressing the invalidation of unsupervised evaluations when LLM use cannot be reliably detected. Elevated expectations calibrate assignments against expert-generated LLM baselines, distinguishing model-strong competencies, where trainees must outperform models, from model-weak competencies, where trainees must supply capabilities models lack. When paired with training in responsible and effective LLM use, the framework is designed for near-term implementation in programs with existing oral examination infrastructure and the capacity to triage existing assignments by LLM completion difficulty. We acknowledge an existential vulnerability should LLMs unambiguously surpass human capabilities, though orchestration speed, regulatory-constrained contexts, and generative inquiry may remain durable assessment dimensions. Overall, the RAE Framework can prepare graduates for success in a world where LLM ubiquity is fundamentally reshaping knowledge work.