Multi-focus image fusion based on dynamic threshold neural P systems and difference of Gaussian clarity
Abstract
To address the limited depth of field of optical lenses and integrate complementary information, Multi-Focus Image Fusion (MFIF) merges multiple images of the same scene taken at different focal points into a single fully focused image. Although decision-map methods are commonly used in MFIF, accurately defining the boundaries between focused and defocused regions remains a challenge. The Dynamic Threshold Neural P (DTNP) system is a distributed parallel computing model inspired by the cross-cortical model; its dynamic thresholds and spiking mechanisms improve the discrimination between in-focus and out-of-focus regions. Building on this, an enhanced framework called the Multi-scale Gaussian Contrast Synergistic Dynamic Threshold Neural P System (MGCS-DTNP) is proposed. The method employs Gaussian contrast filtering to extract gradient features, exploiting the fact that focused regions exhibit sharper gradients. These features serve as input stimuli to the DTNP system, which generates an initial decision map based on diffusion time parameters. After post-processing with small-area removal and median filtering, a refined decision map is produced, and the final fused image is reconstructed by combining regions from the source images according to this map. Experiments on the Lytro and MFFW datasets show that MGCS-DTNP outperforms 12 existing algorithms across six evaluation metrics, delivering superior visual quality and fusion performance.
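To make the pipeline concrete, the following is a minimal Python sketch of the decision-map workflow summarized above, assuming two pre-registered grayscale source images of equal size. It replaces the DTNP spiking and diffusion step with a simple per-pixel clarity comparison; the function names, sigma values, and window sizes (dog_clarity, remove_small_regions, fuse, sigma1, sigma2, min_size, median_size) are illustrative assumptions and not the authors' implementation.

import numpy as np
from scipy import ndimage

def dog_clarity(img, sigma1=1.0, sigma2=2.0):
    # Difference-of-Gaussian response as a rough per-pixel clarity measure.
    img = np.asarray(img, dtype=float)
    return np.abs(ndimage.gaussian_filter(img, sigma1) - ndimage.gaussian_filter(img, sigma2))

def remove_small_regions(mask, min_size=200):
    # Flip connected components (of either polarity) smaller than min_size pixels.
    for value in (True, False):
        labels, n = ndimage.label(mask == value)
        sizes = ndimage.sum(mask == value, labels, range(1, n + 1))
        for idx, size in enumerate(sizes, start=1):
            if size < min_size:
                mask[labels == idx] = not value
    return mask

def fuse(img_a, img_b, min_size=200, median_size=5):
    # Initial decision map: True where image A appears more in focus.
    decision = dog_clarity(img_a) > dog_clarity(img_b)
    # Post-processing: small-area removal followed by median filtering.
    decision = remove_small_regions(decision, min_size)
    decision = ndimage.median_filter(decision.astype(np.uint8), size=median_size).astype(bool)
    # Reconstruct the fused image by selecting source pixels according to the decision map.
    return np.where(decision, img_a, img_b)

In the full MGCS-DTNP framework, the initial decision map would instead be derived from the DTNP system's firing behavior under diffusion time parameters, driven by the multi-scale Gaussian contrast features, rather than from the direct clarity comparison used in this sketch.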