Enhancing Faithfulness in Text Summarization: A Hallucination Detection and Mitigation Framework Based on LLM

Abstract

Recent advances in large language model (LLM) technology have spurred notable progress in automatic text summarization. However, the field continues to grapple with significant challenges, particularly hallucination, where summaries include information absent from the source text. Such errors undermine the factual accuracy of summaries and contribute to user dissatisfaction. Existing methods are limited in their ability to detect and mitigate hallucinations, and their underlying mechanisms often lack transparency. This paper presents a hallucination detection and mitigation framework that employs the Q-S-E methodology to detect hallucinations in summaries quantitatively. Leveraging LLMs, the framework introduces an iterative hallucination resolution mechanism that makes the revision process more transparent and improves the faithfulness of text summarization. Experiments on three benchmark datasets (CNN/Daily Mail, PubMed, and arXiv) demonstrate that our approach markedly improves the factual consistency of summaries while preserving their informational completeness.
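
As a rough illustration of the iterative detection-and-resolution loop described in the abstract, the following Python sketch shows how a detect-then-revise cycle might be organized. All names here (Hallucination, detect_hallucinations, llm_revise, resolve_iteratively) are hypothetical, and the Q-S-E detection step is left as a stub because its internals are not specified in this abstract; this is a sketch of the general pattern, not the authors' implementation.

```python
from dataclasses import dataclass


@dataclass
class Hallucination:
    span: str    # summary span not supported by the source text
    reason: str  # detector's explanation, kept for transparency of the revision


def detect_hallucinations(source: str, summary: str) -> list[Hallucination]:
    """Placeholder for the Q-S-E detection step (details not given in the abstract)."""
    # e.g. check each summary claim against evidence drawn from the source
    return []


def llm_revise(source: str, summary: str, issues: list[Hallucination]) -> str:
    """Placeholder LLM call that rewrites only the flagged spans."""
    # e.g. prompt an LLM with the source, the summary, and the detected issues
    return summary


def resolve_iteratively(source: str, summary: str, max_rounds: int = 3) -> str:
    """Repeat detect -> revise until no hallucinations remain or the budget is spent."""
    for _ in range(max_rounds):
        issues = detect_hallucinations(source, summary)
        if not issues:
            break
        summary = llm_revise(source, summary, issues)
    return summary
```

Keeping the detected issues as structured objects, rather than only returning a rewritten summary, is one way to preserve the transparency of the modification process that the abstract emphasizes.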
