Reasoning in Large Language Models: A Survey
Abstract
With the rapid advancement of artificial intelligence (AI) technologies, large language models (LLMs) exhibit remarkable problem-solving capabilities. Yet while LLMs have revolutionized natural language processing (NLP), their inherent limitations in structured reasoning impede performance on complex AI tasks that demand multi-step logic, contextual comprehension, and knowledge synthesis. This paper provides a comprehensive overview of approaches to bridging this gap, categorizing reasoning techniques into basic and advanced paradigms. We analyze cutting-edge strategies, including prompt engineering, retrieval-augmented reasoning, and neural-symbolic architectures, that offer diverse perspectives on reasoning across the phases of query formulation, information retrieval (IR), and answer generation. By establishing a taxonomy of reasoning-enhanced IR models and exploring their applications, we illustrate measurable improvements in the accuracy and interpretability of contemporary LLMs, particularly IR-related models. Nevertheless, persistent challenges in multi-hop reasoning, output consistency, and domain adaptation call for future work on modular systems, dynamic knowledge integration, and reasoning-aware training frameworks. Our synthesis emphasizes that the next evolution of LLMs, and of IR models alike, lies not merely in retrieving information but in the ability to genuinely understand, retain, and reason over it, mirroring human cognition.