Beyond Wavelets and Deep Learning: A Model-Based Fusion Framework Using Discrete Band-Limited Shearlets

Abstract

The limited dynamic range of standard digital imaging sensors often results in under- or over-exposed images that fail to capture the full radiance of a scene as perceived by the Human Visual System (HVS). While deep learning-based Multi-exposure Image Fusion (MEF) methods have recently dominated the field, they often depend on large training datasets and lack interpretability. To address this, we propose a novel, model-based MEF framework leveraging the Discrete Band-Limited Shearlet Transform (DBLST). The shearlet transform provides a superior multi-scale and directional representation compared to traditional wavelets, making it exceptionally adept at capturing edges and textures across varying exposure levels—a crucial capability for high-quality fusion. Our method decomposes source images using DBLST and fuses the coefficients through specifically designed rules for low-pass and high-pass components. Extensive experiments on standard datasets demonstrate that the proposed algorithm not only significantly outperforms conventional wavelet-based methods but also achieves competitive performance against recent state-of-the-art approaches in terms of both subjective visual quality and objective metrics (including structural similarity, entropy, and average gradient). The results confirm DBLST as a powerful, efficient, and interpretable tool for high-dynamic-range image rendering, offering a compelling alternative to data-driven deep learning models.
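The abstract states that DBLST coefficients are fused through "specifically designed rules" for low-pass and high-pass components, without giving the rules themselves. As a rough illustration of what coefficient-level fusion typically looks like in model-based MEF, the sketch below applies two common baseline rules to precomputed multiscale coefficients: averaging for the low-pass band and per-pixel maximum-absolute selection for the high-pass (directional) bands. The function name and the rules are assumptions for illustration, not the paper's actual method; the DBLST decomposition itself is taken as given.

```python
import numpy as np

def fuse_coefficients(low_a, low_b, highs_a, highs_b):
    """Fuse multiscale coefficients of two exposures (baseline rules).

    low_a, low_b   : 2-D low-pass bands of the two source images.
    highs_a, highs_b : lists of 2-D high-pass (directional) bands.

    Low-pass: simple average, preserving overall brightness structure.
    High-pass: keep the coefficient with larger magnitude at each
    position, so the strongest edges/textures from either exposure
    survive. These are common baselines, not the paper's exact rules.
    """
    fused_low = 0.5 * (low_a + low_b)
    fused_highs = [np.where(np.abs(ha) >= np.abs(hb), ha, hb)
                   for ha, hb in zip(highs_a, highs_b)]
    return fused_low, fused_highs
```

In a full pipeline, the fused coefficients would be passed back through the inverse DBLST to render the final image; replacing the average with a locally adaptive weight (e.g. based on local contrast or well-exposedness) is a typical refinement.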
