Multi-Way Data Representation: A Comprehensive Survey on Tensor Decomposition in Machine Learning
Abstract
Tensor decomposition has emerged as a fundamental tool in machine learning, enabling efficient representation, compression, and interpretation of high-dimensional data. Unlike traditional matrix factorization methods, tensor decomposition extends to multi-way data structures, capturing complex relationships and latent patterns that would otherwise remain hidden. This paper provides a comprehensive overview of tensor decomposition techniques, including CANDECOMP/PARAFAC (CP), Tucker, and Tensor Train (TT) decompositions, and their applications in various machine learning domains. We explore optimization strategies, computational challenges, and real-world case studies demonstrating the effectiveness of tensor methods in areas such as natural language processing, recommender systems, deep learning compression, and biomedical informatics. Furthermore, we discuss emerging trends and future research directions, including the integration of tensor decomposition with deep learning, scalability improvements, and applications in quantum computing. As machine learning continues to evolve, tensor decomposition is poised to play an increasingly critical role in data-driven discovery and model interpretability.
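To make the CP (CANDECOMP/PARAFAC) model named above concrete, the sketch below builds a rank-R tensor as a sum of R outer products of factor-matrix columns, which is the defining structure of a CP decomposition. All names here (A, B, C, R, the dimensions) are illustrative choices, not taken from the survey.

```python
import numpy as np

# Illustrative sketch of the CP model: X[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r],
# i.e. a sum of R rank-one tensors formed from columns of the factor matrices.
rng = np.random.default_rng(0)
I, J, K, R = 4, 5, 6, 2  # mode sizes and CP rank (assumed values)
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))

# Assemble the full tensor in one contraction.
X = np.einsum('ir,jr,kr->ijk', A, B, C)

# The same tensor built explicitly as a sum of R outer products,
# confirming the two formulations agree.
X_check = sum(
    np.multiply.outer(np.multiply.outer(A[:, r], B[:, r]), C[:, r])
    for r in range(R)
)
print(np.allclose(X, X_check))
```

Fitting such a model to data (rather than constructing it from known factors) is typically done with alternating least squares, as implemented in libraries such as TensorLy.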