Readability, Reliability, and Quality of Nursing Care Plan Texts Generated by ChatGPT

Abstract

Background: Research on ChatGPT-generated nursing care plan texts plays a critical role in making nursing education more innovative and accessible. Such studies strengthen education by assessing and improving the readability, reliability, and quality of these texts.

Purpose: This study aims to evaluate the readability, reliability, and quality of nursing care plan texts generated by ChatGPT.

Methods: The study sample consisted of 50 texts generated by ChatGPT based on nursing diagnoses selected from NANDA 2021–2023. The texts were evaluated using a descriptive criteria form, the DISCERN tool, and four readability indices: the Flesch Reading Ease Score (FRES), the Simple Measure of Gobbledygook (SMOG), the Gunning Fog Index, and the Flesch-Kincaid Grade Level (FKGL).

Results: The reading grade level of the nursing care plans generated by ChatGPT was significantly higher than the recommended sixth-grade level (P < .001). The mean DISCERN score was 45.93 ± 4.72, indicating a moderate level of reliability across the evaluated texts. In addition, 97.5% of the texts achieved moderate scores on the information quality subscale. The number of verifiable references correlated positively and significantly with both the reliability (r = 0.408) and quality (r = 0.379) scores of the texts (P < .05).

Conclusion: These AI-based chatbot tools cannot replace comprehensive patient care plans. When AI applications are used, it is recommended that the readability of generated content be improved, reliable references be included, and all outputs be reviewed by a professional team.
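For readers unfamiliar with the readability indices named in the Methods, the minimal sketch below computes them from raw text counts using their standard published formulas. The function name, example counts, and use of Python are illustrative assumptions, not the authors' tooling; the study's actual scoring pipeline is not described in the abstract.

    import math

    def readability_indices(words, sentences, syllables, complex_words):
        """Compute four standard readability indices from raw text counts.

        words         - total word count
        sentences     - total sentence count
        syllables     - total syllable count
        complex_words - number of words with three or more syllables
        """
        wps = words / sentences   # average words per sentence
        spw = syllables / words   # average syllables per word
        # Flesch Reading Ease: higher scores indicate easier text
        fres = 206.835 - 1.015 * wps - 84.6 * spw
        # Flesch-Kincaid Grade Level: U.S. school grade needed to read the text
        fkgl = 0.39 * wps + 11.8 * spw - 15.59
        # Gunning Fog Index: grade level based on sentence length and complex words
        fog = 0.4 * (wps + 100 * complex_words / words)
        # SMOG: grade level from polysyllabic words, normalized to 30 sentences
        smog = 1.0430 * math.sqrt(complex_words * 30 / sentences) + 3.1291
        return {"FRES": fres, "FKGL": fkgl, "Fog": fog, "SMOG": smog}

    # Hypothetical example: 500 words, 25 sentences, 800 syllables, 90 polysyllabic words
    print(readability_indices(500, 25, 800, 90))
    # -> FRES ~51 ("fairly difficult"), FKGL ~11th grade, Fog ~15, SMOG ~14,
    #    i.e., well above the sixth-grade level recommended for patient-facing text

Note that a real pipeline must also count words, sentences, and syllables from the text itself; syllable counting in particular is language-dependent and is taken as a given input here.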
