Mitigating Hallucinations in Large Language Models: A Comparative Study of RAG-enhanced vs. Human-Generated Medical Templates

Abstract

The integration of Large Language Models (LLMs) is increasingly recognized for its potential to enhance many aspects of healthcare, including patient care, medical research, and education. ChatGPT, OpenAI's user-friendly GPT-4-based chatbot, has become especially popular. However, current limitations of LLMs, such as hallucinations, outdated information, and ethical and legal complications, may pose significant risks to patients and contribute to the spread of medical disinformation. This study applies Retrieval-Augmented Generation (RAG) to mitigate these common limitations of LLMs such as ChatGPT and assesses its effectiveness in summarizing and organizing medical information. Up-to-date clinical guidelines were used as the source of information to create detailed medical templates. These were evaluated against human-generated templates by a panel of physicians using Likert scales for accuracy and usefulness, and programmatically using BERTScore for textual similarity. The LLM-generated templates scored higher on average for both accuracy and usefulness than the human-generated templates, and BERTScore analysis showed high textual similarity between the ChatGPT- and human-generated templates. These results indicate that RAG-enhanced LLM prompting can effectively summarize and organize medical information, demonstrating strong potential for use in clinical settings.
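
The abstract does not describe the pipeline in detail, but a minimal sketch of the two components it names, retrieval-grounded prompting over guideline text and BERTScore comparison of LLM versus human templates, could look like the following. The guideline excerpts, query, model names, and template strings here are illustrative assumptions, not the study's actual materials.

    # Sketch only: (1) retrieve guideline passages to ground the prompt (RAG),
    # (2) compare an LLM-generated template to a human-generated one with BERTScore.
    from sentence_transformers import SentenceTransformer, util
    from bert_score import score

    # Hypothetical excerpts from up-to-date clinical guidelines (placeholders).
    guideline_passages = [
        "First-line therapy for condition X is drug A at 10 mg daily ...",
        "Patients with renal impairment should receive a reduced dose ...",
        "Follow-up imaging is recommended at 6 and 12 months ...",
    ]

    # Embed passages and the query, then keep the most relevant passages.
    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    passage_emb = embedder.encode(guideline_passages, convert_to_tensor=True)
    query = "Create a structured treatment-summary template for condition X"
    query_emb = embedder.encode(query, convert_to_tensor=True)
    top_k = util.cos_sim(query_emb, passage_emb)[0].topk(2).indices.tolist()
    context = "\n".join(guideline_passages[i] for i in top_k)

    # Retrieved context is prepended to the instruction; the resulting prompt
    # would then be sent to the LLM (e.g., ChatGPT via the OpenAI API).
    prompt = (
        "Using only the guideline excerpts below, produce a structured medical "
        "template for condition X.\n\nGuideline excerpts:\n" + context
    )

    # Evaluation step: BERTScore F1 as textual similarity between templates.
    llm_template = ["Diagnosis: ... Treatment: drug A 10 mg daily. Follow-up: imaging at 6 and 12 months."]
    human_template = ["Dx: ... Rx: drug A 10 mg/day. F/U: imaging at 6 mo and 12 mo."]
    P, R, F1 = score(llm_template, human_template, lang="en")
    print(f"BERTScore F1: {F1.mean().item():.3f}")

In such a setup, grounding the prompt in retrieved guideline text is what addresses hallucination and outdated information, while BERTScore provides the programmatic similarity measure reported alongside the physicians' Likert ratings.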
