Causal-RAG: Causally-Augmented Retrieval for Hallucination-Free Clinical Decision Support in Low-Resource Settings

Abstract

This research addresses a critical challenge in using artificial intelligence for healthcare in low-resource settings: the tendency of AI models to produce confident but incorrect information, a phenomenon known as hallucination. We propose Causal-RAG, a novel framework that enhances standard Retrieval-Augmented Generation (RAG) by integrating principles of causal inference. The goal is to ground the AI's responses in robust, causally relevant evidence rather than mere correlations. We built a prototype and tested it on a clinical question-answering task. Our findings reveal a fundamental trade-off: a standard RAG system achieved high accuracy but displayed a dangerous 'yes' bias, whereas our Causal-RAG approach reduced this overconfidence, prioritizing safety. This work establishes a foundation for developing more trustworthy and reliable AI decision-support tools for clinical environments where data and expertise are scarce.
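The abstract does not detail the mechanism by which causal inference is integrated into retrieval. As a rough illustration only, the sketch below shows one plausible shape for such a pipeline: retrieved passages carry both a semantic-similarity score (the standard RAG signal) and a causal-relevance score, are re-ranked with the causal signal weighted more heavily, and the system abstains when the top evidence is not causally grounded, countering the 'yes' bias described above. All names (`Passage`, `causal_rerank`, `min_causal`), the scoring scheme, and the weighting are hypothetical assumptions, not the authors' implementation.

```python
# Minimal, hypothetical sketch of a Causal-RAG-style pipeline (not the paper's code).
from dataclasses import dataclass


@dataclass
class Passage:
    text: str
    similarity: float    # semantic similarity to the query (standard RAG signal)
    causal_score: float  # assumed strength of causal evidence, e.g. from a curated causal graph


def causal_rerank(passages: list[Passage], alpha: float = 0.7) -> list[Passage]:
    """Blend semantic similarity with causal relevance, weighting causality higher."""
    return sorted(
        passages,
        key=lambda p: alpha * p.causal_score + (1 - alpha) * p.similarity,
        reverse=True,
    )


def answer(query: str, passages: list[Passage], min_causal: float = 0.5) -> str:
    """Answer only when the top-ranked evidence is causally grounded; otherwise abstain."""
    ranked = causal_rerank(passages)
    if not ranked or ranked[0].causal_score < min_causal:
        # Abstention is what trades raw accuracy for safety.
        return "Insufficient causal evidence; deferring to a clinician."
    # In a full system, the ranked passages would be passed to a generator LLM.
    return f"Answer grounded in: {ranked[0].text}"


if __name__ == "__main__":
    evidence = [
        Passage("Drug A is frequently co-prescribed with Drug B.", 0.9, 0.2),
        Passage("An RCT shows Drug A reduces mortality in condition X.", 0.7, 0.9),
    ]
    print(answer("Does Drug A help in condition X?", evidence))
```

Under these assumptions, a merely correlational passage can rank high on similarity yet be demoted or trigger abstention, which is the safety behavior the abstract attributes to Causal-RAG.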
