Evaluating Large Language Models for Colonoscopy Preparation Assistance: Correctness and Diversity in Synthetic Dialogues
Abstract
Background
Colorectal cancer is the third leading cause of cancer-related deaths in the United States, and colonoscopy remains the gold standard for early detection and prevention. However, many procedures are postponed due to inadequate bowel preparation, a preventable failure often caused by patients’ difficulty understanding or following written prep instructions. Prior interventions such as reminder apps and instructional videos have improved adherence only modestly, largely because they cannot answer patients’ specific questions. Recent advances in large language models (LLMs) raise the possibility of conversational assistants that provide interactive support to patients preparing for the procedure.
Objective
This study evaluated the correctness and diversity of synthetic dialogues generated by leading LLMs acting as both simulated AI Coaches and simulated patients for colonoscopy preparation.
Methods
Four leading LLMs (OpenAI’s o3 and GPT-4.1, Meta’s Llama 3.3 70B, and Mistral’s Large-2411) were used to generate 250 patient-AI Coach dialogues per model. Prompts were designed to elicit diverse patient questions about diet, medications, and other prep-related topics. Human raters, including medical experts, evaluated responses for correctness, error type, and potential harmfulness. Automatic evaluation using an LLM-as-a-judge approach complemented the human ratings.
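To make the automatic evaluation step concrete, the sketch below shows one way an LLM-as-a-judge correctness check could be wired up. It is a minimal illustration assuming the OpenAI Python SDK; the judge model, rubric text, and error labels are hypothetical stand-ins, not the study’s actual prompts or grading scheme.

```python
# Illustrative LLM-as-a-judge correctness check for a single coach response.
# Assumes the OpenAI Python SDK (>= 1.0); rubric, labels, and the chosen
# judge model are hypothetical stand-ins for the study's actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

JUDGE_RUBRIC = (
    "You are grading an AI coach's answer to a colonoscopy-prep question.\n"
    "Reply with exactly one label: CORRECT, MINOR_ERROR, or HARMFUL_ERROR.\n"
    "HARMFUL_ERROR means the answer omits or misstates prep instructions "
    "in a way that could compromise bowel preparation or patient safety."
)

def judge_turn(question: str, coach_answer: str, model: str = "gpt-4.1") -> str:
    """Ask a judge model to label one coach response against the rubric."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": JUDGE_RUBRIC},
            {"role": "user",
             "content": f"Question: {question}\nAnswer: {coach_answer}"},
        ],
        temperature=0,  # deterministic grading
    )
    return response.choices[0].message.content.strip()

# Example usage:
# label = judge_turn(
#     "Can I drink black coffee the morning of my colonoscopy?",
#     "Yes, clear liquids such as black coffee are allowed until ...",
# )
```

In practice a judge like this would be run over every turn of the 250 dialogues per model and its labels compared against the human ratings before being trusted at scale.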
Results
All four models approached but did not achieve adequate performance. Closed-weight models (GPT-4.1, o3) outperformed open-weight models (Llama, Mistral) on correctness, while multi-prompt generation substantially improved question diversity. All models produced harmful errors, primarily omissions or misinterpretations of prep instructions.
Conclusions
While LLMs demonstrate strong potential for colonoscopy preparation support, none of the models evaluated is yet reliable enough for unsupervised deployment in patient-facing settings without effective safety layers.