Kolmogorov-Arnold Networks for Interpretable and Efficient Function Approximation
Abstract
Kolmogorov--Arnold Networks (KANs) are a class of neural architectures inspired by the classical Kolmogorov--Arnold representation theorem, which asserts that any multivariate continuous function can be expressed as a finite superposition of continuous univariate functions and addition. This insight leads to a fundamentally different paradigm from conventional deep neural networks: rather than stacking layers of affine transformations and pointwise activations, KANs apply learned univariate transformations directly to individual inputs and linearly combine the results, preserving a modular and interpretable structure.

In this survey, we provide a comprehensive overview of KANs from both theoretical and practical perspectives. We begin by tracing their mathematical foundations in classical approximation theory and their relationship to universal function approximators. We then describe the architecture of modern KAN implementations, including spline-based and neural parameterizations of the univariate functions, and examine their expressive power in comparison with traditional multilayer perceptrons (MLPs). The survey further discusses optimization strategies, training dynamics, and computational considerations, highlighting the benefits and trade-offs of KANs in real-world settings.

We analyze a broad range of applications in regression, scientific modeling, symbolic regression, and physics-informed learning, demonstrating how KANs can achieve high accuracy with fewer parameters and improved interpretability. In doing so, we identify emerging trends, such as hybrid models that combine KANs with deep architectures, and suggest directions for future research.

Our goal is to present Kolmogorov--Arnold Networks not only as a theoretically elegant construct but also as a practical tool for interpretable, efficient, and structured machine learning. This survey aims to foster a deeper understanding of KANs and to serve as a resource for researchers and practitioners interested in exploring this growing frontier of neural network design.
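For concreteness, the representation result referred to above is, in its standard formulation, the statement that any continuous function f : [0,1]^n -> R can be written as

\[
f(x_1, \dots, x_n) = \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right),
\]

where each inner function \phi_{q,p} and each outer function \Phi_q is a continuous univariate function. KANs replace these fixed functions with learned, parameterized ones.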
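To make the layer structure described above concrete, the following is a minimal, hypothetical sketch of a KAN layer in PyTorch. For brevity it parameterizes each edge's univariate function as a linear combination of Gaussian radial basis functions on a fixed grid, a simplification that some KAN variants use in place of B-splines; the class name KANLayer and all hyperparameters are illustrative and not drawn from any particular implementation.

```python
# Minimal KAN-layer sketch (illustrative, not a reference implementation).
# Each edge (input i -> output j) carries its own learned univariate function,
# parameterized here as a linear combination of Gaussian basis functions on a
# fixed grid -- a stand-in for the spline parameterization discussed above.
import torch
import torch.nn as nn

class KANLayer(nn.Module):
    def __init__(self, in_dim, out_dim, num_basis=8, grid_range=(-2.0, 2.0)):
        super().__init__()
        # Fixed grid of basis-function centers, shared by all edges.
        centers = torch.linspace(grid_range[0], grid_range[1], num_basis)
        self.register_buffer("centers", centers)
        self.width = (grid_range[1] - grid_range[0]) / (num_basis - 1)
        # One coefficient vector per edge: shape (out_dim, in_dim, num_basis).
        self.coef = nn.Parameter(torch.randn(out_dim, in_dim, num_basis) * 0.1)

    def forward(self, x):
        # x: (batch, in_dim)
        # Evaluate every basis function at every input coordinate:
        # basis has shape (batch, in_dim, num_basis).
        basis = torch.exp(-((x.unsqueeze(-1) - self.centers) / self.width) ** 2)
        # phi[b, j, i] = sum_k coef[j, i, k] * basis[b, i, k]: the learned
        # univariate function on edge (i -> j), evaluated at x[b, i].
        phi = torch.einsum("oik,bik->boi", self.coef, basis)
        # Each output unit sums its incoming univariate transforms -- the outer
        # "addition" in the Kolmogorov--Arnold representation.
        return phi.sum(dim=-1)

# Usage: stack two layers to approximate a function of two variables.
model = nn.Sequential(KANLayer(2, 5), KANLayer(5, 1))
y = model(torch.randn(16, 2))  # output shape: (16, 1)
```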