Context and Layers in Harmony: A Unified Strategy for Mitigating LLM Hallucinations

Abstract

Large language models, despite their strong performance, frequently produce hallucinated content because they rely excessively on pre-trained knowledge while insufficiently integrating newly provided context. We introduce LACD, a technique that dynamically rebalances probability distributions across layers so that critical context is not overshadowed. By emphasizing new prompt information, LACD alleviates lower-layer dominance and mitigates hallucinations. On the HotPotQA dataset, LACD outperforms basic context injection baselines by approximately 2.2% in exact match (EM) and matches or exceeds advanced methods such as DoLa and CAD. LACD also demonstrates robust gains on SQuAD, underscoring its capacity to reduce hallucinations while improving factual consistency. Overall, these findings highlight the importance of carefully integrating newly provided context with pre-trained knowledge to achieve more reliable language generation.
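
The abstract does not spell out the exact decoding rule, so the following is only a minimal sketch of how a layer- and context-aware rebalancing of next-token probabilities could look, assuming LACD combines a DoLa-style contrast between a late and an early layer with a CAD-style contrast between context-conditioned and context-free logits. The function name, the choice of contrasts, and the weights alpha and beta are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def lacd_next_token_logprobs(
    final_logits: torch.Tensor,       # logits from the last (mature) layer, given the full prompt
    early_logits: torch.Tensor,       # logits read out from an earlier (premature) layer
    no_context_logits: torch.Tensor,  # last-layer logits when the new context is removed from the prompt
    alpha: float = 0.5,               # weight on the layer contrast (assumed hyperparameter)
    beta: float = 0.5,                # weight on the context contrast (assumed hyperparameter)
) -> torch.Tensor:
    """Hypothetical rebalancing of next-token probabilities toward late-layer, context-driven signal."""
    # Layer contrast (DoLa-style): information that only emerges in later layers.
    layer_contrast = final_logits - early_logits
    # Context contrast (CAD-style): information contributed by the newly provided context.
    context_contrast = final_logits - no_context_logits
    # Rebalanced logits: boost tokens supported by late layers and by the new context,
    # damping dominance of lower-layer, purely parametric knowledge.
    adjusted = final_logits + alpha * layer_contrast + beta * context_contrast
    return F.log_softmax(adjusted, dim=-1)

# Toy usage with random logits over a 32k-token vocabulary.
vocab = 32_000
final_logits = torch.randn(vocab)
early_logits = torch.randn(vocab)
no_context_logits = torch.randn(vocab)
next_token = lacd_next_token_logprobs(final_logits, early_logits, no_context_logits).argmax()
```

In this reading, alpha controls how strongly lower-layer dominance is suppressed and beta controls how strongly the newly provided prompt information is emphasized; the full paper should be consulted for the actual formulation and hyperparameters.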
