LTSMiTransformer: Learnable Temporal Sparsity and Memory for Efficient Long-Term Time Series Forecasting
Abstract
Long-term multivariate time series forecasting is critical in domains such as finance, climate science, and infrastructure planning, but it faces challenges including high dimensionality, computational inefficiency, and the capture of long-range dependencies. While transformers excel at modelling sequential data, their quadratic complexity and reliance on heuristic sparsity limit scalability. To address these issues, we propose the Learnable Temporal Sparse Memory iTransformer (LTSMiTransformer), a novel architecture integrating three key innovations: (1) learnable temporal sparse attention, which dynamically identifies relevant time steps to reduce computational overhead; (2) a memory-augmented module, which captures long-term dependencies without excessive memory consumption; and (3) a unified embedding strategy, which enhances feature representation across heterogeneous datasets. Extensive experiments on eight benchmarks demonstrate that LTSMiTransformer achieves state-of-the-art accuracy, particularly in long-horizon settings, while maintaining computational efficiency. Our analysis highlights its robustness to periodic patterns, trend shifts, and cross-domain adaptation. We also discuss limitations (e.g., hyperparameter sensitivity) and provide actionable insights for future work. Code will be released upon paper acceptance.
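To make the first innovation concrete, the sketch below shows one plausible, heavily simplified reading of "learnable temporal sparse attention": a learnable scorer ranks time steps, only the top-k keys and values are kept, and scaled dot-product attention runs over that reduced set. This is an illustrative assumption rather than the authors' released implementation; the names (TemporalSparseAttention, k_keep, score_proj) are hypothetical, and the memory-augmented module and unified embedding are omitted.

```python
# Illustrative sketch only: a guess at what a learnable temporal sparse
# attention layer could look like, not the LTSMiTransformer code itself.
import torch
import torch.nn as nn

class TemporalSparseAttention(nn.Module):
    def __init__(self, d_model: int, k_keep: int):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.score_proj = nn.Linear(d_model, 1)  # learnable relevance score per time step
        self.k_keep = k_keep

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        scores = self.score_proj(x).squeeze(-1)              # (batch, seq_len)
        top_idx = scores.topk(self.k_keep, dim=-1).indices   # most relevant time steps
        gather_idx = top_idx.unsqueeze(-1).expand(-1, -1, x.size(-1))
        x_sel = torch.gather(x, 1, gather_idx)               # (batch, k_keep, d_model)
        sel_scores = torch.gather(scores, 1, top_idx)        # scores of kept steps

        q = self.q_proj(x)                                   # queries over all time steps
        k = self.k_proj(x_sel)                               # keys/values only over kept steps
        # Weighting values by sigmoid(score) keeps the scorer differentiable
        # despite the hard top-k selection (one common relaxation, assumed here).
        v = self.v_proj(x_sel) * torch.sigmoid(sel_scores).unsqueeze(-1)

        attn = torch.softmax(q @ k.transpose(-2, -1) / x.size(-1) ** 0.5, dim=-1)
        return attn @ v                                      # (batch, seq_len, d_model)

# Usage example with an input window of 96 steps, keeping 16 of them:
# layer = TemporalSparseAttention(d_model=64, k_keep=16)
# y = layer(torch.randn(8, 96, 64))   # -> shape (8, 96, 64)
```

With only k_keep keys attended per query, the attention cost drops from quadratic in sequence length to roughly linear, which is the efficiency argument the abstract makes.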