Physician- and Large Language Model-Generated Hospital Discharge Summaries: A Blinded, Comparative Quality and Safety Study


Abstract

Importance

High-quality discharge summaries are associated with improved patient outcomes but contribute to clinical documentation burden. Large language models (LLMs) provide an opportunity to support physicians by drafting discharge summary narratives.

Objective

To determine whether LLM-generated discharge summary narratives are of comparable quality and safety to those of physicians.

Design

Cross-sectional study.

Setting

University of California, San Francisco.

Participants

100 randomly selected inpatient Hospital Medicine encounters of 3-6 days' duration between 2019 and 2022.

Exposure

Blinded evaluation of physician- and LLM-generated narratives was performed in duplicate by 22 attending physician reviewers.

Main Outcomes and Measures

Narratives were reviewed for overall quality, reviewer preference, comprehensiveness, concision, coherence, and three error types: inaccuracies, omissions, and hallucinations. Each individual error, and each narrative overall, was assigned a potential-harmfulness score on an adapted 0-7 AHRQ scale.

Results

Across 100 encounters, LLM- and physician-generated narratives were comparable in overall quality on a 1-5 Likert scale (average 3.67 [SD 0.49] vs 3.77 [SD 0.57], p=0.213) and reviewer preference (χ2 = 5.2, p=0.270). LLM-generated narratives were more concise (4.01 [SD 0.37] vs. 3.70 [SD 0.59]; p<0.001) and more coherent (4.16 [SD 0.39] vs. 4.01 [SD 0.53], p=0.019) than their physician-generated counterparts, but less comprehensive (3.72 [SD 0.58] vs. 4.13 [SD 0.58]; p<0.001). LLM-generated narratives contained more unique errors (average 2.91 [SD 2.54] errors per summary) than physician-generated narratives (1.82 [SD 1.94]). Averaged across individual errors, there was no significant difference in the potential for harm between LLM- and physician-generated narratives (1.35 [SD 1.07] vs 1.34 [SD 1.05], p=0.986). Both LLM- and physician-generated narratives had low overall potential for harm (<1 on 0-7 scale), although LLM-generated narratives scored higher than physician narratives (0.84 [SD 0.98] vs 0.36 [SD 0.70], p<0.001).

Conclusions and Relevance

In this cross-sectional study of 100 inpatient Hospital Medicine encounters, LLM-generated discharge summary narratives were of similar quality to those generated by physicians and were equally preferred by reviewers. LLM-generated summaries were more likely to contain errors but had low overall harmfulness scores. Our findings suggest that LLMs could be used to draft discharge summary narratives of comparable quality and safety to those written by physicians.

Key Points

Question

Can large language models (LLMs) draft hospital discharge summary narratives of comparable quality and safety to those written by physicians?

Findings

In this cross-sectional study of 100 discharge summaries, LLM- and physician-generated narratives were rated comparably by blinded reviewers on overall quality and preference. LLM-generated narratives were more concise and coherent than their physician-generated counterparts, but less comprehensive. While LLM-generated narratives were more likely to contain errors, their overall potential for harm was low.

Meaning

These findings suggest the potential for LLMs to aid clinicians by drafting discharge summary narratives.
