Domain-Invariant Dehazing via Depth-Aware Transmission Estimation and Image Restoration

Abstract

Haze significantly degrades image quality and adversely affects the performance of downstream vision applications, so dehazing is essential for restoring visual clarity. Most existing data-driven dehazing methods rely heavily on synthetic hazy-clean image pairs due to the scarcity of real-world paired data. However, these methods often generalize poorly to real-world settings owing to the inherent domain shift between synthetic and real haze distributions. To address this challenge, we propose a Domain-Invariant Dehazing Network (DID-Net), which comprises two core components: a Depth-Guided Transmission Map Estimation Network (DTME-Net) and a Physics-Aware Dehazing Network (PDD-Net). DTME-Net learns depth-to-transmission mappings from real-world data to generate synthetic hazy images with realistic haze distributions, providing reliable training data for improved cross-domain generalization. PDD-Net leverages depth-aware attention to modulate features according to spatial depth, improving dehazing in complex scenes. We further apply post-optimization to refine the network's parameters for superior results. Extensive experiments on real-world benchmarks demonstrate that the proposed method significantly mitigates the synthetic-to-real domain gap and outperforms state-of-the-art dehazing approaches both quantitatively and qualitatively.
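For context, transmission-based haze synthesis of the kind DTME-Net builds on follows the standard atmospheric scattering model, I(x) = J(x) t(x) + A (1 - t(x)) with t(x) = exp(-beta * d(x)), where J is the clean image, d the scene depth, A the global airlight, and beta the scattering coefficient. The sketch below is a minimal illustration of this rendering step, not the authors' DTME-Net: the function name synthesize_haze and the fixed beta and airlight values are placeholders, whereas DTME-Net learns the depth-to-transmission mapping from real data.

    import numpy as np

    def synthesize_haze(clean, depth, beta=1.0, airlight=0.9):
        """Render a hazy image from a clean image and its depth map via the
        atmospheric scattering model I = J*t + A*(1 - t), t = exp(-beta*d).
        clean: HxWx3 array in [0, 1]; depth: HxW array (larger = farther).
        beta and airlight are illustrative constants, not values from the paper.
        """
        t = np.exp(-beta * depth)[..., None]      # transmission map, HxWx1
        return clean * t + airlight * (1.0 - t)   # blend toward airlight with distance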
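Similarly, one plausible reading of "depth-aware attention" is a per-pixel gate predicted from the depth map that rescales feature channels, so near and far regions are processed differently. The module below is a hypothetical sketch of that idea in PyTorch, not the authors' exact design; the class name DepthAwareGate and its layer choices are assumptions for illustration only.

    import torch
    import torch.nn as nn

    class DepthAwareGate(nn.Module):
        """Hypothetical depth-aware modulation: weights in (0, 1) predicted
        from the depth map rescale the feature channels per pixel."""
        def __init__(self, channels):
            super().__init__()
            self.gate = nn.Sequential(
                nn.Conv2d(1, channels, kernel_size=3, padding=1),
                nn.Sigmoid(),  # per-pixel, per-channel gate in (0, 1)
            )

        def forward(self, feats, depth):
            # feats: (B, C, H, W) features; depth: (B, 1, H, W) depth map
            return feats * self.gate(depth)  # modulate features by spatial depth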
