Bidirectional Aware Vision Mamba for Lightweight Single Image Super-Resolution
Abstract
Single image super-resolution (SISR) aims to recover high-resolution images from their low-resolution counterparts. Despite significant progress driven by deep learning, existing CNN- and Transformer-based methods struggle to balance reconstruction fidelity with computational efficiency: CNNs suffer from limited receptive fields, while Transformers incur prohibitive computational costs due to the quadratic complexity of attention. Recent State Space Models (SSMs) have emerged as promising alternatives, offering linear complexity and strong long-range modeling capabilities. However, standard Mamba processes images via unidirectional 2D scanning, which inadequately captures rich global visual context. To address this limitation, we propose the Bidirectional Aware Mamba Network (BAMN), a novel lightweight U-shaped architecture that leverages Bidirectional Scan Mamba Blocks (BSMB) to comprehensively model contextual information from both forward and backward directions. BAMN further incorporates a Global Context Fusion Block (GCFB) within skip connections to effectively aggregate multi-scale features across encoder and decoder stages, enabling high-fidelity reconstruction of both local textures and global structures. Extensive experiments on standard benchmarks demonstrate that BAMN outperforms state-of-the-art methods in both quantitative metrics and visual quality, while maintaining a compact model size and low computational overhead.
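To make the bidirectional-scanning idea concrete, the sketch below flattens a feature map into a token sequence and scans it in both directions before fusing the results. This is a minimal illustration, not the paper's BSMB: the actual block uses a selective SSM (Mamba) scan with learned parameters, whereas here a toy fixed-decay linear recurrence stands in for the SSM, and the fusion is a simple average; the function names and the `decay` parameter are assumptions for illustration only.

```python
import numpy as np

def linear_scan(tokens, decay=0.9):
    """Toy causal recurrence h_t = decay * h_{t-1} + x_t,
    standing in for a learned selective SSM (Mamba) scan."""
    h = np.zeros_like(tokens[0])
    out = np.empty_like(tokens)
    for t, x in enumerate(tokens):
        h = decay * h + x
        out[t] = h
    return out

def bidirectional_scan(feat):
    """Scan a flattened (H, W, C) feature map forward and backward,
    then fuse the two directions so every position sees both the
    preceding and the following context."""
    H, W, C = feat.shape
    seq = feat.reshape(H * W, C)        # row-major flattening into tokens
    fwd = linear_scan(seq)              # forward: context from earlier tokens
    bwd = linear_scan(seq[::-1])[::-1]  # backward: context from later tokens
    fused = 0.5 * (fwd + bwd)           # averaging fusion (a learned merge is likely in practice)
    return fused.reshape(H, W, C)
```

A unidirectional scan would leave each token blind to everything after it in scan order; running the second, reversed pass is what gives every spatial position access to global context in both directions.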