Towards Scalable and Resource-Conscious Reasoning: A Survey of Efficient Models

Abstract

Reasoning—the ability to draw conclusions, infer relationships, and solve complex problems—is a cornerstone of artificial intelligence. As reasoning models grow increasingly powerful, achieving efficiency in their computation, memory usage, and data requirements has become a critical challenge. This survey provides a comprehensive review of efficient reasoning models, spanning symbolic, neural, and neuro-symbolic paradigms. We systematically analyze foundational concepts, architectural innovations, algorithmic strategies, and training methodologies that drive efficiency in reasoning systems. We also examine benchmark datasets and empirical evaluations that highlight trade-offs between accuracy and computational cost across different reasoning tasks. Furthermore, emerging trends such as adaptive computation, modular design, neuro-symbolic integration, and hardware-aware optimization are discussed in detail. By synthesizing advances from diverse approaches, this work aims to guide researchers and practitioners in designing reasoning models that are not only effective but also scalable and resource-conscious, ultimately advancing the deployment of intelligent systems in real-world, resource-limited settings.
