Hallucinations in LLMs: Types, Causes, and Approaches for Enhanced Reliability

Abstract

Large Language Models (LLMs) have transformed natural language processing and generation, enabling diverse applications ranging from content creation to decision support systems. However, a key challenge they face is hallucination—the generation of content that is factually incorrect, inconsistent, or entirely fabricated, yet often appears coherent and plausible. This survey comprehensively examines hallucinations in LLMs, categorizing them into intrinsic, extrinsic, amalgamated, and non-factual types, and analyzing underlying mechanisms such as knowledge overshadowing and contextual misalignment. We evaluate the impact of hallucinations in critical domains such as healthcare, scientific research, and journalism, where misinformation can have serious consequences. Conversely, we explore the potential for hallucinations to drive creativity in fields such as art and design, where unconventional outputs can inspire innovation. The survey also reviews detection methods, including named entity recognition and probability-based approaches, alongside mitigation strategies such as prompt engineering, fine-tuning, and grounding techniques. Finally, we discuss future research directions, focusing on improved evaluation methods, ethical considerations, and advanced integration techniques to enhance the reliability and ethical use of LLMs. By balancing accuracy with creativity, this survey aims to contribute to the development of more trustworthy and responsible large language models.