Context and Layers in Harmony: A Unified Strategy for Mitigating LLM Hallucinations

Abstract

Large language models, despite their strong performance, frequently produce hallucinated content by relying excessively on pre-trained knowledge and overlooking newly provided prompts. We introduce LACD, a technique that dynamically rebalances probability distributions across layers, ensuring critical context is not overshadowed. By emphasizing new prompt information, LACD alleviates lower-layer dominance and mitigates hallucinations. On the HotPotQA dataset, LACD surpasses naive context augmentation by roughly 2.2% in exact match (EM), matching or outperforming advanced approaches such as DoLa and CAD. LACD also demonstrates robust gains on SQuAD, underscoring its capacity to reduce hallucinations while improving factual consistency. Overall, these findings highlight the importance of carefully integrating newly provided context with pre-trained knowledge to achieve more reliable language generation.
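The abstract does not spell out LACD's exact formulation, but the description of rebalancing layer-wise distributions so that context-driven tokens are not overshadowed suggests a DoLa/CAD-style contrast between a shallow-layer and a final-layer prediction. The sketch below is only an illustration under that assumption; the function names (`lacd_rebalance`, `softmax`), the `alpha` weight, and the log-ratio form are hypothetical and not taken from the paper.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a vocabulary of logits.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def lacd_rebalance(early_layer_logits, final_layer_logits, alpha=1.0):
    """Contrast the final-layer distribution against an early-layer one.

    Tokens whose probability grows with depth (typically those driven by the
    newly provided context) gain mass; tokens already dominant in shallow
    layers (memorized priors) are damped. This is an assumed formulation,
    not the paper's verified method.
    """
    p_final = softmax(final_layer_logits)
    p_early = softmax(early_layer_logits)
    contrast = np.log(p_final + 1e-12) - alpha * np.log(p_early + 1e-12)
    return softmax(contrast)

# Toy usage: a 5-token vocabulary where token 2 only becomes likely at depth,
# so the contrastive step should promote it over the shallow-layer favorites.
early = np.array([2.0, 1.5, 0.1, 0.0, -1.0])
final = np.array([1.8, 1.4, 2.5, 0.0, -1.0])
print(lacd_rebalance(early, final).round(3))
```

In this toy example, token 2 receives most of the rebalanced probability mass because its likelihood rises sharply between the early and final layers, mirroring the intuition that depth-emergent, context-driven predictions should be emphasized over lower-layer priors.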
