Simplifying cardiology research abstracts: assessing ChatGPT’s readability and comprehensibility for non-medical audiences


Abstract

Artificial Intelligence (AI)-powered chatbots are increasingly used in academic medical settings for tasks such as evidence synthesis and manuscript drafting. This study evaluates the ability of ChatGPT, an AI-powered tool, to simplify cardiology research abstracts for non-medical audiences while retaining essential information. A total of 113 abstracts from Circulation were rewritten by ChatGPT at a 5th-grade reading level. Readability was assessed using word and character counts and Flesch-Kincaid Grade Level (FKGL) and Reading Ease (FKRE) scores, while a panel of five physicians and five laypeople evaluated the simplified texts for accuracy, completeness, and readability. The simplification significantly reduced word and character counts (p<0.0001) and improved readability from a college-graduate level to an 8th-9th grade level (p<0.001). Both physicians and laypeople found the simplified abstracts easier to understand, but some lay reviewers expressed concerns about oversimplification and missing details. Overall, ChatGPT proved effective at simplifying cardiology research while largely preserving content integrity, though further refinement of AI tools is needed to ensure accuracy.
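The FKGL and FKRE metrics above are standard formulas computed from average sentence length and average syllables per word. As a rough illustration (not the study's actual scoring tool), a minimal Python sketch might look like this; the syllable counter is a crude vowel-group heuristic, whereas production tools rely on pronunciation dictionaries:

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count vowel groups, drop a trailing silent "e".
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    n = len(groups)
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def readability(text: str) -> tuple[float, float]:
    """Return (FKGL, FKRE) for a block of English text."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    w, s = len(words), sentences
    # Flesch-Kincaid Grade Level and Flesch Reading Ease formulas.
    fkgl = 0.39 * (w / s) + 11.8 * (syllables / w) - 15.59
    fkre = 206.835 - 1.015 * (w / s) - 84.6 * (syllables / w)
    return fkgl, fkre
```

Lower FKGL and higher FKRE indicate easier text, which is the direction of change the study reports after simplification.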

Author Summary

In this study we investigated how artificial intelligence (AI), specifically ChatGPT, can augment the comprehensibility of complex cardiology research and thus make it more accessible to people without a medical background. We focused on simplifying abstracts by having ChatGPT rewrite them at a 5th-grade reading level. We analyzed 113 cardiology abstracts from manuscripts published in the journal Circulation, measuring readability and word counts before and after the AI simplification process. A group of five physicians and five non-medical participants then reviewed the simplified versions to assess whether they remained accurate, complete, and easy to understand. Our results revealed that ChatGPT significantly shortened the abstracts and made them easier to read, improving readability from a college level to an 8th or 9th grade level. Both medical experts and non-experts agreed the simplified abstracts were clearer. However, some non-medical participants raised concerns that important details might be lost in the simplification process. This highlights a key challenge: while AI tools like ChatGPT can improve access to scientific information, further refinement is needed to balance simplicity with accuracy. Our work underscores the potential of AI in bridging the gap between medical research and public understanding, making complex health information more approachable for everyone.
