Underwater Image Enhancement Method Based on Vision Mamba


Abstract

To address haze, blurring, and color distortion in underwater images, this paper proposes U-Vision Mamba, an underwater image enhancement model built on the Vision Mamba framework. The core innovation is a U-shaped encoder for multi-scale feature extraction, combined with a multi-scale sparse attention fusion module that aggregates these features; the fusion module uses sparse attention to capture global context while preserving fine detail. The decoder then refines the aggregated features to produce high-quality underwater images. Experiments on the UIEB dataset show that U-Vision Mamba markedly reduces blurring and corrects color distortion, achieving a PSNR of 25.65 dB and an SSIM of 0.972. Both subjective evaluation and objective metrics confirm the model’s superior performance and robustness, making it a promising solution for improving the clarity and usability of underwater imagery in applications such as marine exploration and environmental monitoring.
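
The abstract describes the pipeline only at a high level: a U-shaped multi-scale encoder, a sparse-attention fusion stage, and a refining decoder. The sketch below is a hypothetical PyTorch outline of such a pipeline, not the authors' implementation: the class names (`BlockStandIn`, `SparseAttentionFusion`, `UVisionMambaSketch`), the top-k channel masking used as a stand-in for "sparse attention", and all layer sizes are assumptions; a faithful implementation would use real Vision Mamba (state-space) blocks where the stand-in block appears.

```python
# Hypothetical sketch of the architecture described in the abstract.
# All module designs and hyperparameters here are assumptions, not the paper's code.
import torch
import torch.nn as nn


class BlockStandIn(nn.Module):
    """Placeholder for a Vision Mamba block: depthwise conv + pointwise projection."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels),
            nn.Conv2d(channels, channels, 1),
            nn.GELU(),
        )

    def forward(self, x):
        return x + self.body(x)


class SparseAttentionFusion(nn.Module):
    """Toy multi-scale fusion: projects and resamples encoder features to a common
    resolution, then applies channel attention with a hard top-k mask (one possible
    reading of 'sparse attention'; an assumption, not the paper's module)."""
    def __init__(self, channels_list, out_channels, keep_ratio=0.5):
        super().__init__()
        self.proj = nn.ModuleList([nn.Conv2d(c, out_channels, 1) for c in channels_list])
        self.gate = nn.Conv2d(out_channels * len(channels_list), out_channels, 1)
        self.keep = keep_ratio

    def forward(self, feats):
        target = feats[0].shape[-2:]
        resized = [
            nn.functional.interpolate(p(f), size=target, mode="bilinear", align_corners=False)
            for p, f in zip(self.proj, feats)
        ]
        fused = self.gate(torch.cat(resized, dim=1))
        # Keep only the top-k channels by mean activation; zero out the rest.
        scores = fused.mean(dim=(-2, -1))                     # (B, C)
        k = max(1, int(scores.shape[1] * self.keep))
        topk = scores.topk(k, dim=1).indices
        mask = torch.zeros_like(scores).scatter_(1, topk, 1.0)
        return fused * mask.unsqueeze(-1).unsqueeze(-1)


class UVisionMambaSketch(nn.Module):
    """Three-level U-shaped encoder, multi-scale fusion, and a light decoder
    mapping back to an enhanced RGB image."""
    def __init__(self, base=32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, base, 3, padding=1), BlockStandIn(base))
        self.down1 = nn.Conv2d(base, base * 2, 3, stride=2, padding=1)
        self.enc2 = BlockStandIn(base * 2)
        self.down2 = nn.Conv2d(base * 2, base * 4, 3, stride=2, padding=1)
        self.enc3 = BlockStandIn(base * 4)
        self.fuse = SparseAttentionFusion([base, base * 2, base * 4], base)
        self.dec = nn.Sequential(BlockStandIn(base), nn.Conv2d(base, 3, 3, padding=1))

    def forward(self, x):
        f1 = self.enc1(x)
        f2 = self.enc2(self.down1(f1))
        f3 = self.enc3(self.down2(f2))
        fused = self.fuse([f1, f2, f3])
        return torch.sigmoid(self.dec(fused))


if __name__ == "__main__":
    model = UVisionMambaSketch()
    out = model(torch.rand(1, 3, 128, 128))
    print(out.shape)  # torch.Size([1, 3, 128, 128])
```

The top-k channel masking above is only one way to realise sparsity; the paper's fusion module may implement sparse attention quite differently.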
