D2W-GAN: Wavelet-Based Attention-Infused GAN for Enhanced Image Raindrop Removal in Vision Systems

Abstract

Raindrops on camera lenses and the resulting image aberrations are key problems in computer vision applications such as autonomous driving, surveillance, and environmental monitoring. In this restoration work, we present D2W-GAN, a novel scale-integrative method for raindrop removal that combines the Dual-Tree Complex Wavelet Transform (DT-CWT) with a U-shaped network (U-Net), attention mechanisms, and Generative Adversarial Networks (GANs). Most current reconstruction methods fail to remove raindrop-related distortions effectively while preserving image quality. This paper shows that the DT-CWT accurately captures both high-frequency detail and low-frequency structure, which enables the model to separate raindrop artifacts from the underlying image. The attention mechanism improves restoration quality by helping the network identify raindrop-affected regions. The GAN framework further refines the model output, producing results that look natural and consistent. D2W-GAN outperforms current state-of-the-art methods in restored image quality and structural similarity, as measured by PSNR and SSIM. Furthermore, the proposed method is computationally efficient, making it suitable for real-time applications. By leveraging wavelet transforms, attention-based learning, and adversarial refinement, D2W-GAN sets a new standard in image restoration for complex, uncontrolled environments.
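To make the described pipeline concrete, the following is a minimal PyTorch sketch of a wavelet-domain generator with channel attention, trained against an adversarial plus reconstruction objective. The pytorch_wavelets dependency, the ChannelAttention and WaveletGenerator names, the layer widths, and the loss in the closing comment are illustrative assumptions, not the authors' implementation.

# Minimal sketch of a DT-CWT-based, attention-infused generator (assumed design,
# not the paper's code). Assumes the pytorch_wavelets package is installed.
import torch
import torch.nn as nn
from pytorch_wavelets import DTCWTForward, DTCWTInverse  # assumed dependency

class ChannelAttention(nn.Module):
    # Squeeze-and-excitation-style gating to re-weight raindrop-affected features.
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
    def forward(self, x):
        return x * self.fc(x)

class WaveletGenerator(nn.Module):
    # Decompose the image with the DT-CWT, refine the low-pass (structure) band
    # with a small attention CNN, then reconstruct. A full model would also
    # process the complex high-pass (detail) bands and use a U-Net backbone.
    def __init__(self, channels=3, width=32):
        super().__init__()
        self.dtcwt = DTCWTForward(J=2, biort='near_sym_b', qshift='qshift_b')
        self.idtcwt = DTCWTInverse(biort='near_sym_b', qshift='qshift_b')
        self.refine = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True),
            ChannelAttention(width),
            nn.Conv2d(width, channels, 3, padding=1),
        )
    def forward(self, x):
        yl, yh = self.dtcwt(x)        # low-pass band + list of complex high-pass bands
        yl = yl + self.refine(yl)     # residual correction of the structure band
        return self.idtcwt((yl, yh))  # back to the image domain

# Training objective (hypothetical form): for generator G, discriminator D,
# rainy input x and clean target y,
#   L_G = ||G(x) - y||_1 + lambda_adv * L_adv(D(G(x)))
# where L_adv is any standard GAN generator loss.

In a usage pass, WaveletGenerator()(rainy_batch) returns a restored batch of the same spatial size; the separation of low- and high-frequency bands is what lets the attention CNN act on global structure while the detail bands are preserved or processed separately.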
