Physician- versus Large Language Model-Generated Summaries in the Emergency Department

Abstract

High-quality one-liner summaries are essential in the emergency department (ED) to support rapid decision-making, but generating them is cognitively demanding and adds to documentation burden; large language models (LLMs) may help by synthesizing longitudinal electronic health record (EHR) data into concise, clinically useful summaries. This blinded, within-subject study examined 99 ED encounters from adult patients with prior inpatient admissions at the University of California, San Francisco (March 2022–March 2024). Twenty-six ED physicians (14 attendings, 12 residents) evaluated paired LLM- and physician-generated summaries in randomized order, rating each on accuracy, completeness, and clinical utility using 5-point Likert scales and indicating their preferred summary with optional free-text explanations. LLM-generated summaries were preferred in 50.5% of encounters, physician summaries in 38.4%, and 11.1% were ties. Compared with physician summaries, LLM summaries had higher mean (SD) scores for accuracy (4.27 [0.98] vs 3.49 [1.43]; P < .001), completeness (3.72 [1.08] vs 3.28 [1.25]; P = .006), and clinical utility (3.95 [1.20] vs 3.28 [1.51]; P < .001). Qualitative feedback suggested that LLMs tended to produce more inclusive and neutrally phrased summaries, whereas physicians offered richer nuance but sometimes omitted key details. These findings suggest that domain-adapted, LLM-generated one-liners can outperform physician-authored summaries on multiple quality dimensions and, with clinician oversight, may aid rapid synthesis of complex EHR data in high-stakes settings.