Context-Aware Fusion for Brain Disease Detection via Hybrid Decomposition and Optimization

Abstract

Background: Integrating complementary information from multimodal images into a single fused image is crucial for early and accurate diagnosis.

Method: An innovative fusion framework applies Cardinal Sub-cardinal Directional Associated Filtering (CSDAF) to decompose the input image (I) into a Gross Layer (GL) and a Residual Layer (RL), while a Fine Layer (FL) is extracted from I by a unique Moving Average Convergence Divergence (MACD) filtering technique. A dual activity-measurement strategy, performed by a twin-channel Siamese Convolutional Neural Network (SCNN) and a Self-Adaptive Pulsed Coupled Neural System (SAPCNS), generates activity maps of the Gross Layers (AM_GL) and Fine Layers (AM_FL), respectively. AM_GL then undergoes multi-scale decomposition for context-aware fusion: the Fused Gross Layer (GL_FUSED) is obtained by a bi-decision rule based on a Similarity Measuring Coefficient (SMC) and a parametric threshold (τ) tuned through a novel hybridization of the World Cup (WC) and Grey Wolf Optimization (GWO) algorithms, whereas the Fused Residual Layer (RL_FUSED) is derived via a Cloud Coefficient (CC) computed with a Statistical Cloud Model (SCM).

Results: Experimental results on standardized benchmark databases demonstrate the superior performance of the proposed algorithm over existing methods, both qualitatively and quantitatively. The method improves information integration (EN), clarity (SD and TMQI), edge quality (AG), activity level (SF), and simultaneous visualization (NMI), achieving increases of up to 10.5422% in Entropy (EN), 11.3300411% in Standard Deviation (SD), 25.578% in Average Gradient (AG), 36.95% in Spatial Frequency (SF), 10.4448% in Normalized Mutual Information (NMI), and 2.0496% in Tone Mapped Quality Index (TMQI) relative to the compared state-of-the-art techniques.
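The two-scale decomposition above can be sketched in a minimal form. Since the CSDAF filter is specific to this work and not detailed in the abstract, a simple box blur stands in for it here; the MACD filter is interpreted in its conventional sense as the difference of a fast and a slow exponential moving average, applied along image rows. All function names and parameters below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ema(x, alpha):
    """Exponential moving average along the last axis (rows of an image)."""
    out = np.empty_like(x, dtype=float)
    out[..., 0] = x[..., 0]
    for i in range(1, x.shape[-1]):
        out[..., i] = alpha * x[..., i] + (1 - alpha) * out[..., i - 1]
    return out

def box_blur(img, k=5):
    """Separable box blur: a simple stand-in for the CSDAF smoothing step."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    kernel = np.ones(k) / k
    rows = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="valid"), 0, rows)

def decompose(img, fast=0.4, slow=0.1):
    """Split image I into Gross, Residual, and Fine layers."""
    gl = box_blur(img)                     # Gross Layer: smoothed base
    rl = img - gl                          # Residual Layer: GL + RL == I
    fl = ema(img, fast) - ema(img, slow)   # Fine Layer: MACD-style detail map
    return gl, rl, fl
```

Note that the Gross and Residual layers reconstruct the input exactly (GL + RL = I), while the Fine Layer is an independent detail signal extracted directly from I, matching the structure described in the Method.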
Conclusion: Comprehensive qualitative and quantitative evaluations show that the proposed image fusion algorithm effectively preserves intricate details without compromising clarity or activity-level preservation, which may offer significant advantages in disease analysis, precise therapy, and broader image-processing applications.
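For readers unfamiliar with the optimization component, the GWO half of the hybrid WC-GWO scheme follows the standard grey-wolf update rule. The sketch below is a generic GWO loop, not the authors' hybrid: the World Cup component and the actual fitness function used to tune the threshold τ are not given in the abstract, so a placeholder quadratic objective is used.

```python
import numpy as np

def gwo(fitness, dim=1, n_wolves=10, iters=50, lb=0.0, ub=1.0, seed=0):
    """Minimal Grey Wolf Optimizer minimizing `fitness` over [lb, ub]^dim."""
    rng = np.random.default_rng(seed)
    wolves = rng.uniform(lb, ub, size=(n_wolves, dim))
    for t in range(iters):
        scores = np.array([fitness(w) for w in wolves])
        order = np.argsort(scores)
        # Alpha, beta, delta: the three best wolves lead the pack.
        alpha, beta, delta = wolves[order[0]], wolves[order[1]], wolves[order[2]]
        a = 2 - 2 * t / iters  # exploration factor decays from 2 to 0
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - wolves[i])
                new += leader - A * D
            wolves[i] = np.clip(new / 3.0, lb, ub)
    scores = np.array([fitness(w) for w in wolves])
    return wolves[np.argmin(scores)]

# Placeholder objective: suppose the best threshold tau were 0.35; in the
# paper, fitness would instead score fusion quality as a function of tau.
tau = gwo(lambda w: (w[0] - 0.35) ** 2)
```

In the proposed framework, the optimized scalar would serve as the parametric threshold τ in the bi-decision fusion rule; the hybridization with the World Cup algorithm would alter how candidate solutions compete between iterations.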