OpenPFx: Evaluating the Ability of LLMs to Create Patient-Friendly Explanations of Radiological Incidental Findings

Abstract

The 21st Century Cures Act mandates patient access to electronic health information, yet radiology reports often remain inaccessible due to specialized terminology and widespread low health literacy. This study evaluates large language model (LLM)–based workflows for generating patient-friendly explanations (PFx) of incidental MRI findings. Four approaches—zero-shot, few-shot, multiple few-shot, and agentic—were benchmarked using ICD-10 code alignment for accuracy and Flesch Reading Ease scores for readability. Across 407 outputs per workflow, the agentic method demonstrated the strongest overall performance, achieving a sixth-grade reading level and the highest accuracy. Compared with prior work limited by small sample sizes or suboptimal readability, these results indicate that structured, agent-based LLM workflows can improve both clarity and diagnostic consistency at scale. By translating complex radiology findings into accessible language, AI-generated PFx provide a scalable strategy to reduce health literacy disparities and advance the Cures Act’s goal of making medical data both transparent and usable for patients.
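The readability benchmark named above, the Flesch Reading Ease (FRE) score, is a standard formula over average sentence length and average syllables per word. The sketch below is a minimal illustration of how such scores are computed; the syllable counter is a rough vowel-group heuristic of our own (the paper does not specify its implementation), so exact values will differ from dedicated readability libraries.

```python
# Illustrative Flesch Reading Ease computation.
# FRE = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
# Higher scores mean easier text; roughly 80-90 corresponds to a
# sixth-grade reading level on Flesch's original scale.
import re


def count_syllables(word: str) -> int:
    # Heuristic (assumption): count runs of consecutive vowels as
    # syllables and discount a trailing silent 'e'. Crude, but fine
    # for a sketch.
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and not word.endswith(("le", "ee")) and n > 1:
        n -= 1
    return max(1, n)


def flesch_reading_ease(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

For example, a one-sentence, all-monosyllable text like "The cat sat on the mat." scores well above 100 (very easy), while dense radiology prose typically scores far lower.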