EntroMamba: Efficient Entropy Modeling for Learned Image Compression via Selective State Spaces

Abstract

Entropy modeling, which predicts the probability distribution of the quantized latent representation, is pivotal for rate-distortion optimization in learned image compression. However, existing CNN- and Transformer-based entropy models face a critical trade-off: capturing long-range dependencies incurs prohibitive computational overhead, while efficient local models sacrifice global context. To resolve this dilemma, we propose a Mamba-based entropy model, termed EntroMamba, which jointly optimizes modeling capacity and inference efficiency through two innovations: (1) HyperMamba2D, a hyperprior extractor that captures long-range spatial dependencies via 2D selective state-space scanning; and (2) HyConMaskMamba, a causal dual-branch module that fuses local context (via masked convolution) with global autoregressive context (via masked Mamba). Experimental results show that the proposed method achieves superior rate-distortion performance over state-of-the-art learned image compression approaches while maintaining a favorable balance between compression efficiency and computational complexity.
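The causality constraint mentioned for HyConMaskMamba's convolutional branch is the standard requirement of autoregressive context models: the prediction for each latent position may depend only on positions already decoded in raster-scan order. A minimal NumPy sketch of such a masked convolution is shown below (this is an illustration of the general PixelCNN-style masking technique, not the authors' implementation; the helper names are hypothetical):

```python
import numpy as np

def causal_mask(k: int) -> np.ndarray:
    """PixelCNN-style type-'A' mask for a k x k kernel: zeros at the
    center position and every position after it in raster-scan order,
    so each output depends only on previously decoded latents."""
    m = np.ones((k, k))
    c = k // 2
    m[c, c:] = 0      # center column and everything to its right
    m[c + 1:, :] = 0  # all rows below the center
    return m

def masked_conv2d(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Valid-mode 2D correlation with the causal mask applied to the
    kernel, computed with an explicit loop for clarity."""
    k = w.shape[0]
    w = w * causal_mask(k)
    h_out, w_out = x.shape[0] - k + 1, x.shape[1] - k + 1
    out = np.zeros((h_out, w_out))
    for i in range(h_out):
        for j in range(w_out):
            out[i, j] = np.sum(x[i:i + k, j:j + k] * w)
    return out
```

For a 3x3 kernel the mask keeps only the three pixels of the row above and the single pixel to the left, which is exactly the "already decoded" neighborhood in raster order; the Mamba branch described in the abstract would extend this local causal window to the full decoded history.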
