SCALES-AI: A Supervision- and Context-Aligned Entrustment Framework for Integrating Artificial Intelligence into Emergency Medicine Education

Abstract

The rapid entry of artificial intelligence (AI) into emergency medicine (EM) education demands a practical way to decide how much autonomy to grant specific tools for specific educational tasks. We propose SCALES-AI (Supervision- and Context-Aligned Levels of Entrustment for AI in Education), a theory-informed rubric that rates AI tools—rather than trainees or faculty—on an entrustment scale for educational (not clinical) use. Developed through a targeted literature synthesis, a three-workshop multidisciplinary process, and stakeholder refinement, SCALES-AI links four Foundational Principles (human-centered augmentation, evidence-based scalability, context-specific adaptation, and an ethical foundation) to four Dimensions of Trustworthiness (Ability, Integrity, Benevolence, and Equity). These dimensions inform a six-level entrustment scale (0–5), ranging from “not appropriate” to “full autonomy for low-stakes tasks,” with supervision intensity and audit cadence matched to level, and with re-evaluation triggered by model, prompt, corpus, or guideline changes and by observed drift. We operationalize each dimension with example indicators and illustrate its use through concrete scenarios (e.g., documentation feedback, practice-question generation, simulation debrief support, portfolio analytics) mapped to appropriate levels. To support implementation and reproducibility, we provide a SCALES-AI Checklist to document tool-task ratings, supervision and scope controls, and monitoring plans, and a SCALES-AI Inter-Rater Calibration Template to record independent ratings and consensus, enabling local reliability assessment. Although motivated by the high-stakes, time-pressured EM environment, SCALES-AI is generalizable across health professions education. Treating AI-tool trustworthiness as dynamic and context-dependent enables safe, equitable, and auditable adoption while preserving essential human elements of medical training.