A statistical framework for evaluating repeatability and reproducibility of large language models in diagnostic reasoning


Abstract

A major concern in applying large language models (LLMs) to medicine is their output variability, as they can generate different responses even when the input prompt, model architecture, and parameters remain the same. In this study, we present a statistical framework for evaluating LLM consistency by quantifying the repeatability and reproducibility of outputs in diagnostic reasoning. The framework captures both semantic variability, reflecting differences in the clinical meaning of outputs, and internal variability, reflecting differences in token-level generation behavior. These dimensions are critical in medicine, where subtle shifts in meaning or model reasoning may influence clinician interpretation and decision-making. We apply the framework across multiple LLMs using validated diagnostic prompts on standardized medical exam vignettes and real-world rare disease cases from the Undiagnosed Diseases Network. We find that LLM consistency depends on the model, prompt, and complexity of the patient case, and is generally not correlated with diagnostic accuracy. This highlights the need for case-by-case assessment of output consistency to ensure reliability in clinical applications. Our framework can support model and prompt selection to promote more reliable use of LLMs in medicine.
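The abstract does not specify how repeatability is scored, so the following is only a minimal illustrative sketch of one way semantic consistency across repeated runs of the same prompt could be quantified: embed each response and take the mean pairwise cosine similarity. The function name, the use of embedding cosine similarity, and the random vectors standing in for real response embeddings are all assumptions for illustration, not the authors' metric.

```python
import itertools
import numpy as np

def mean_pairwise_cosine(embeddings):
    """Mean pairwise cosine similarity across repeated outputs.

    `embeddings` is a list of 1-D numpy vectors, one per repeated LLM
    response to the same prompt (e.g., sentence embeddings of the
    predicted diagnoses). Higher values suggest more semantically
    consistent (repeatable) outputs for that case.
    """
    sims = [
        float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
        for a, b in itertools.combinations(embeddings, 2)
    ]
    return float(np.mean(sims)) if sims else 1.0

# Hypothetical example: three repeated runs represented by stand-in embeddings.
rng = np.random.default_rng(0)
runs = [rng.normal(size=384) for _ in range(3)]
print(f"Semantic repeatability proxy: {mean_pairwise_cosine(runs):.3f}")
```

In practice the same pairwise scheme could be applied per case and per model, so that consistency is assessed case by case rather than as a single aggregate number, in line with the framework's emphasis on case-level assessment.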
