Improving Doctor-Patient Communication Using Large Language Models - Results from an Experimental Study

Abstract

Importance

Medical jargon poses significant barriers to patient comprehension of healthcare information, potentially affecting treatment adherence and health outcomes.

Objective

To investigate whether large language models (LLMs) can improve patient understanding of medical notes by translating complex medical terminology into comprehensible lay language.

Design, Setting, and Participants

This experimental online study was conducted between August 27 and October 8, 2024, using a within-subjects design. Participants were recruited from a university population; 63 adults aged 19 to 30 years (52 female [82.5%]; mean age, 21.9 years) completed the full study protocol.

Interventions

Four fictional medical notes representing typical neurological cases (stroke, neuromyelitis optica, meningitis, subarachnoid hemorrhage) were created by neurologists and translated into lay language using GPT-4. Each participant viewed two original and two translated notes in counterbalanced order.
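For illustration, the following is a minimal sketch of how a single note might be translated into lay language with GPT-4 via the OpenAI Python client. The prompt wording, model settings, and example note text are assumptions for demonstration only, not the materials used in the study.

```python
# Minimal sketch (not the authors' actual pipeline): translating a medical
# note into lay language using the OpenAI Python client.
# The system prompt and the example note below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

medical_note = (
    "Patient presents with acute onset right-sided hemiparesis and aphasia; "
    "CT angiography shows occlusion of the left M1 segment of the MCA."
)

response = client.chat.completions.create(
    model="gpt-4",  # the study reports using GPT-4
    messages=[
        {
            "role": "system",
            "content": (
                "Rewrite the following medical note in plain language that a "
                "patient without medical training can understand. Keep all "
                "clinically relevant information; avoid jargon and abbreviations."
            ),
        },
        {"role": "user", "content": medical_note},
    ],
)

print(response.choices[0].message.content)
```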

Main Outcomes and Measures

Primary outcomes included objective comprehension measured through content-related questions, self-paced reading times, and subjective ratings of understanding, difficulty, empathy, and mental effort on 5-point Likert scales.

Results

The translated notes demonstrated significant improvements across all measured dimensions except average reading times. Participants achieved higher comprehension scores (effect size details to be added), reported greater subjective understanding, perceived higher empathy, and experienced reduced mental effort and fewer negative emotions when reading translated versus original medical notes.

Conclusions and Relevance

LLM translation of medical notes significantly improved both objective and subjective patient understanding. From a psychological perspective, these results align with predictions from cognitive load theory, emphasizing the importance of adapting language complexity to reduce processing demands in communication between laypeople and experts. These findings suggest potential for integrating AI-assisted communication tools into clinical practice to enhance patient comprehension and engagement, though implementation considerations, including accuracy validation and clinician workflow integration, require further investigation.
