MML: MoE-Based Fusion of Mamba and LightTS for Sequence Prediction in Hyperspectral Image Compression
Abstract
Hyperspectral imagery provides rich spectral–spatial information crucial for Earth observation, but its massive data volume poses severe challenges for onboard transmission and storage. The widely used CCSDS-123.0-B-2 standard, valued for its low complexity and engineering feasibility, relies on an adaptive filtering predictor with two key limitations: it models intricate spatio-spectral correlations poorly and converges slowly. To address these issues, this paper introduces a novel predictor architecture, termed MML (Mixture-of-Experts fused Mamba–LightTS predictor), grounded in a Mixture-of-Experts (MoE) framework. The proposed design integrates the long-sequence modeling capability of the Mamba state-space model with the lightweight efficiency of LightTS, while employing a dynamic routing mechanism to adaptively determine each expert's contribution. Unlike autoencoder-based schemes, it directly performs pixel-level prediction and entropy coding for lossless and near-lossless compression, thereby improving prediction accuracy and resource efficiency while keeping computational complexity low. Experimental evaluations on NASA AVIRIS and China's Gaofen-5 (GF-5) datasets demonstrate that the proposed method consistently outperforms CCSDS-123.0-B-2 in compression ratio and error control under both lossless and high-fidelity scenarios. Furthermore, in complex scenes, it exhibits superior performance and robustness compared with state-of-the-art deep learning approaches, including Verdu, 1D-CAE, and SSCNet. These findings point to a promising avenue for onboard hyperspectral image compression that balances efficiency, low complexity, and support for lossless operation.
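To make the gated fusion described above concrete, the following is a minimal sketch of an MoE predictor that blends two experts with a learned softmax router. It is an illustration under stated assumptions, not the authors' implementation: the class name MoEFusionPredictor, the context dimension d_in, and the MLP stand-ins for the Mamba and LightTS branches are all hypothetical.

```python
# Minimal sketch of the MoE fusion: two experts (MLP stand-ins for the
# Mamba and LightTS branches) whose scalar predictions are blended by a
# learned softmax gate. All names and dimensions are illustrative.
import torch
import torch.nn as nn

class MoEFusionPredictor(nn.Module):
    def __init__(self, d_in: int, d_hidden: int = 64):
        super().__init__()
        # Placeholder experts; in the paper these would be a Mamba
        # state-space block and a LightTS block, respectively.
        self.expert_mamba = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.GELU(), nn.Linear(d_hidden, 1))
        self.expert_lightts = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.GELU(), nn.Linear(d_hidden, 1))
        # Dynamic router: per-sample weights over the two experts.
        self.gate = nn.Linear(d_in, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, d_in) causal context, e.g. neighboring pixels/bands.
        w = torch.softmax(self.gate(x), dim=-1)            # (batch, 2)
        preds = torch.cat([self.expert_mamba(x),
                           self.expert_lightts(x)], dim=-1)  # (batch, 2)
        return (w * preds).sum(dim=-1, keepdim=True)       # fused prediction

# Usage: predict a pixel value from a 16-dim spectral-spatial context;
# the residual against the true pixel would then be entropy coded, as in
# CCSDS-style prediction-based pipelines.
model = MoEFusionPredictor(d_in=16)
context = torch.randn(8, 16)
pred = model(context)  # shape: (8, 1)
```

In this sketch the gate makes a soft, per-sample choice between the long-range (Mamba-like) and lightweight (LightTS-like) experts, which matches the abstract's claim of adaptively determining expert contributions rather than using a fixed weighting.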