Readability, Quality, Understandability, and Actionability of ChatGPT Generated GI Patient Education vs AGA Patient Center

Abstract

Background and Aims: Patients increasingly use the internet and artificial intelligence chatbots to obtain health information, yet the readability, quality, understandability, and actionability of AI-generated gastrointestinal patient education remain unclear. This study compared gastrointestinal patient education from a professional society website with content generated by ChatGPT, using validated health literacy instruments.

Methods: In this cross-sectional comparative study, 50 gastrointestinal patient education topics from the American Gastroenterological Association (AGA) patient information website were paired with ChatGPT-generated responses produced from standardized prompts. Readability was assessed using the Flesch-Kincaid Grade Level (FKGL). Quality of treatment information was evaluated using the DISCERN instrument. Understandability and actionability were assessed using the Patient Education Materials Assessment Tool (PEMAT). Paired t tests were used to compare mean scores between sources.

Results: Fifty paired topics were analyzed. The mean FKGL was higher for ChatGPT than for professional society materials (10.33 vs 8.72; mean difference, 1.61; 95% CI, 0.89–2.32; P = .00012). Differences in DISCERN scores (63.52 vs 64.30; mean difference, −0.78; 95% CI, −3.10 to 1.53; P = .49), PEMAT understandability (87.91% vs 86.52%; mean difference, 1.39%; 95% CI, −1.48% to 4.26%; P = .33), and PEMAT actionability (78.57% vs 77.93%; mean difference, 0.63%; 95% CI, −3.14% to 4.40%; P = .73) were not statistically significant.

Conclusion: ChatGPT-generated gastrointestinal patient education demonstrated quality, understandability, and actionability similar to professional society materials but was written at a significantly higher reading level. Improving readability may enhance accessibility and support the safe integration of AI-generated patient education.
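The two quantitative methods named above, the Flesch-Kincaid Grade Level and the paired t test, can be sketched in a few lines of Python. This is a minimal illustration, not the study's analysis code: the FKGL formula is the standard published one, but the paired scores below are hypothetical placeholders, not the study's data.

```python
import math
import statistics

def fkgl(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level (standard published formula)."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def paired_t(a, b):
    """Mean difference and paired t statistic for two matched score lists."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean_diff = statistics.mean(diffs)
    se = statistics.stdev(diffs) / math.sqrt(n)  # standard error of the mean difference
    return mean_diff, mean_diff / se

# Hypothetical paired grade-level scores for illustration only
chatgpt_fkgl = [10.1, 10.5, 9.8, 11.0, 10.2]
aga_fkgl = [8.5, 9.0, 8.4, 9.1, 8.6]
mean_diff, t_stat = paired_t(chatgpt_fkgl, aga_fkgl)
print(f"mean difference = {mean_diff:.2f}, t = {t_stat:.2f}")
```

A p-value would then be obtained by comparing the t statistic against the t distribution with n − 1 degrees of freedom (e.g. via `scipy.stats.ttest_rel`), which is how the paired comparison reported in the Results would be computed in practice.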
