Ro-FusionGAN: An Adversarial Framework for High-Quality Multi-Focus Image Fusion


Abstract

Multi-focus image fusion (MFIF) aims to synthesize an all-in-focus image from source images with varying focal depths. While deep learning has advanced this field, existing "process-imitation" approaches often rely on binary mask supervision, leading to boundary artifacts and a dependency on heuristic post-processing. To address these limitations, we propose Result-Oriented Fusion Generative Adversarial Network (Ro-FusionGAN), a novel result-oriented adversarial framework. Unlike methods that mimic intermediate focus maps, our framework directly optimizes the perceptual quality of the final fused image via a composite fusion-aware loss function. Furthermore, we introduce a differentiable Total Variation (TV) regularizer to autonomously enforce spatial smoothness, enabling the generation of soft decision maps and eliminating the need for post-processing. Extensive experiments demonstrate that Ro-FusionGAN outperforms eleven state-of-the-art methods in visual quality, quantitative metrics, and computational efficiency, yielding artifact-free images with natural focus transitions.
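The Total Variation regularizer mentioned above penalizes large spatial gradients in the decision map, encouraging smooth focus transitions. As an illustrative sketch only (the paper's exact formulation is not given here), an anisotropic TV penalty on a 2D soft decision map can be written as:

```python
import numpy as np

def total_variation(decision_map):
    """Anisotropic total variation of a 2D soft decision map.

    Sums absolute differences between vertically and horizontally
    adjacent pixels; lower values indicate a spatially smoother map.
    Illustrative sketch: in training this would be computed with a
    differentiable framework so gradients flow back to the generator.
    """
    dv = np.abs(np.diff(decision_map, axis=0))  # vertical differences
    dh = np.abs(np.diff(decision_map, axis=1))  # horizontal differences
    return dv.sum() + dh.sum()

# Hypothetical example: a hard binary mask with a single focus boundary.
hard_mask = np.zeros((4, 4))
hard_mask[:, 2:] = 1.0
print(total_variation(hard_mask))            # one unit jump per row -> 4.0
print(total_variation(np.ones((4, 4))))      # constant map -> 0.0
```

In practice this term would be added to the composite fusion-aware loss with a weighting coefficient, trading off smoothness of the decision map against fidelity of the fused image.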
