Retinex-Inspired Dual Attention Transformer for Low-Light Enhancement
Abstract
Low-light image enhancement (LLIE) is a fundamental but challenging task due to non-uniform illumination, severe noise, and structural degradation under poor lighting. This paper proposes GLADFormer, a novel Transformer-based framework that extends the one-stage Retinex theory with perturbation terms to jointly model illumination variation and visual corruption. The architecture consists of three components: a contrastive illumination estimator that extracts discriminative light-up features; a hierarchical corruption restorer built on window attention; and a Pixel-Aware Gated Modulation (PAGM) module for pixel-level refinement. In particular, the restorer adopts a Light-Guided Attention Block (LGAB) whose window-based attention mechanism, Local Chunked Masked Attention (LCMA), models localized spatial context while fusing global exposure cues. This design improves the recovery of fine detail and the suppression of noise in complex low-light scenes. In addition, a contrastive loss encourages robust illumination representation learning. Extensive experiments on five LLIE benchmarks and one downstream detection task demonstrate that GLADFormer achieves state-of-the-art performance with strong generalization and low computational cost. Code is available at https://github.com/JJCcxk/GLADFormer.
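The perturbation extension of the one-stage Retinex model can be made concrete. One common formulation (used by prior one-stage Retinex methods such as Retinexformer; the exact notation below is an assumption, not taken from this paper) decomposes a low-light image $I$ into reflectance $R$ and illumination $L$, then adds perturbation terms $\hat{R}$ and $\hat{L}$ to account for noise and illumination distortion:

$$
I = (R + \hat{R}) \odot (L + \hat{L})
  = R \odot L + R \odot \hat{L} + \hat{R} \odot (L + \hat{L}).
$$

Multiplying by a learned light-up map $\bar{L}$ satisfying $L \odot \bar{L} = \mathbf{1}$ element-wise yields

$$
I_{\text{lu}} = I \odot \bar{L} = R + C,
$$

where $C$ collects the residual corruption terms (noise, artifacts, color distortion) that the corruption restorer must then remove.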
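The abstract does not specify how LCMA is implemented, so the following is only a minimal PyTorch sketch of the general idea: non-overlapping (chunked) window self-attention whose value path is gated by light-up features, fusing illumination cues into local spatial modeling. The module name, the sigmoid-gate fusion rule, and all shapes here are hypothetical, and the masking variant implied by "Masked" in LCMA is omitted because the abstract does not describe it.

```python
import torch
import torch.nn as nn

def window_partition(x, ws):
    # (B, H, W, C) -> (B*nW, ws*ws, C); assumes H and W are divisible by ws
    B, H, W, C = x.shape
    x = x.view(B, H // ws, ws, W // ws, ws, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, C)

def window_reverse(wins, ws, B, H, W):
    # inverse of window_partition: (B*nW, ws*ws, C) -> (B, H, W, C)
    C = wins.shape[-1]
    x = wins.view(B, H // ws, W // ws, ws, ws, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)

class LightGuidedWindowAttention(nn.Module):
    """Hypothetical sketch of an LCMA-style block: window self-attention
    whose values are modulated by projected light-up features."""
    def __init__(self, dim, window_size=8, num_heads=4):
        super().__init__()
        self.ws, self.heads = window_size, num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.light_proj = nn.Linear(dim, dim)  # projects illumination cues
        self.out = nn.Linear(dim, dim)

    def forward(self, x, light):  # x, light: (B, H, W, C)
        B, H, W, C = x.shape
        xw = window_partition(x, self.ws)       # (B*nW, N, C)
        lw = window_partition(light, self.ws)
        q, k, v = self.qkv(xw).chunk(3, dim=-1)
        # fuse global exposure cues into the value path via a sigmoid gate
        v = v * torch.sigmoid(self.light_proj(lw))

        def split(t):  # (B*nW, N, C) -> (B*nW, heads, N, head_dim)
            return t.view(t.shape[0], -1, self.heads, C // self.heads).transpose(1, 2)

        q, k, v = map(split, (q, k, v))
        attn = (q @ k.transpose(-2, -1)) * self.scale
        y = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(xw.shape[0], -1, C)
        return window_reverse(self.out(y), self.ws, B, H, W)

# usage: feature map and light-up features of the same shape
x = torch.randn(1, 64, 64, 32)
light = torch.randn(1, 64, 64, 32)
y = LightGuidedWindowAttention(dim=32)(x, light)  # -> (1, 64, 64, 32)
```

Restricting attention to fixed windows keeps the cost linear in image size, which is consistent with the low computational cost the abstract claims; the gating is one plausible way to inject the estimator's light-up features into the restorer.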