Enhanced Infrared and Visible Image Fusion via Correlation-Driven Rules and Parameter-Free Attention Mechanism

Abstract

Infrared and visible image fusion (IVIF) aims to integrate salient targets and detailed information into a single image suitable for both human perception and machine processing. However, many existing methods rely on hand-designed fusion rules, which lack interpretability and often fail to enhance fine details effectively. To address these issues, we propose an IVIF method based on correlation-driven fusion rules and a parameter-free attention module. Our method retains valid information across layers and modalities by deriving fusion weights from cross-modal feature-map correlations, which improves robustness. We also introduce a parameter-free attention module that adaptively enhances texture details without adding trainable parameters. Experimental results on public datasets demonstrate the superiority of our method in detail retention and target highlighting: on the LLVIP dataset, quantitative evaluations show improvements over state-of-the-art methods of up to 12% in average gradient (AG) and 18% in Visual Information Fidelity (VIF). The source code will be released at https://github.com/CharlesShan-hub/CPFusion.
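The abstract's two key ideas can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the exact weighting formula is an assumption, and the attention module here follows the well-known SimAM-style energy formulation (per-pixel squared deviation from the mean, passed through a sigmoid gate), which is one common parameter-free design. All function names (`correlation_weights`, `simam_attention`, `fuse`) are hypothetical.

```python
import numpy as np

def correlation_weights(feat_ir, feat_vis, eps=1e-8):
    """Derive a pair of fusion weights from the cross-modal correlation.

    Illustrative only: low correlation suggests complementary content, so each
    modality's saliency (here, its standard deviation) is scaled by (1 - corr)
    before a softmax produces weights that sum to 1.
    """
    a = feat_ir.ravel() - feat_ir.mean()
    b = feat_vis.ravel() - feat_vis.mean()
    corr = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))
    scores = np.array([feat_ir.std(), feat_vis.std()]) * (1.0 - corr)
    e = np.exp(scores - scores.max())  # stable softmax
    w = e / e.sum()
    return w[0], w[1]

def simam_attention(feat, lam=1e-4):
    """SimAM-style parameter-free attention (an assumed stand-in).

    Energy is the squared deviation of each pixel from the map mean,
    normalized by variance; a sigmoid of the energy gates the features.
    No trainable parameters are involved.
    """
    mu, var = feat.mean(), feat.var()
    energy = (feat - mu) ** 2 / (4.0 * (var + lam)) + 0.5
    return feat * (1.0 / (1.0 + np.exp(-energy)))

def fuse(feat_ir, feat_vis):
    """Correlation-weighted fusion followed by parameter-free refinement."""
    w_ir, w_vis = correlation_weights(feat_ir, feat_vis)
    fused = w_ir * feat_ir + w_vis * feat_vis
    return simam_attention(fused)
```

In this sketch the fusion weights are computed per feature map rather than learned, which mirrors the abstract's claim of interpretable, correlation-driven rules, while the attention stage adds no parameters to the network.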