CDR-LWP: Layer-Wise Probability Fusion and Interpretable Deep Learning for Multi-Stage Diabetic Retinopathy Classification

Abstract

Diabetic retinopathy (DR) is one of the leading causes of visual impairment and blindness among individuals with diabetes, which underscores the need for accurate and early classification to enable timely intervention. This study proposes a novel deep learning framework based on VGG16 for classifying DR into five severity levels. In contrast to conventional approaches that rely exclusively on final-layer outputs, the proposed model leverages features extracted from all convolutional layers, thus capturing both low- and high-level visual representations. These multi-scale features are processed through fully connected layers to estimate layer-wise probability distributions, which are then aggregated using a weighted network to perform the final classification. To enhance feature refinement and discriminative capability, a Fusion Refinement Block (FRB) is incorporated to improve multi-scale feature fusion, while a Spatial Attention (SA) mechanism is employed to focus on the most relevant retinal regions. Furthermore, oversampling is used to address class imbalance, and contrast-limited adaptive histogram equalization (CLAHE) is applied to improve the visibility of blood vessels in fundus images. The proposed model is evaluated on multiple benchmark datasets (IDRiD, APTOS, DDR, and EyePACS), achieving classification precision ranging from 0.8397 to 0.9372 and quadratic weighted kappa scores ranging from 0.8218 to 0.9623 across these datasets, thus demonstrating its effectiveness and robustness in DR classification tasks. The project code is available at https://github.com/saifalkhaldiurv/CDR-LWP.git.
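To make the layer-wise probability fusion idea concrete, the following is a minimal PyTorch sketch, not the authors' released implementation (see the GitHub link above for that). The block boundaries, head sizes, spatial-attention design, and names such as `LayerWiseFusionDR` and `fusion_logits` are illustrative assumptions about how per-block probabilities from VGG16 could be combined with learned weights.

```python
# Minimal sketch (assumptions, not the paper's exact architecture) of
# layer-wise probability fusion over VGG16 convolutional blocks.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16


class SpatialAttention(nn.Module):
    """Simple spatial attention: reweight each location by a learned saliency map."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Channel-wise average and max summarize the feature map at each location.
        avg_map = x.mean(dim=1, keepdim=True)
        max_map, _ = x.max(dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn


class LayerWiseFusionDR(nn.Module):
    """Per-block probability heads on VGG16 features, fused with learned weights."""

    def __init__(self, num_classes: int = 5):
        super().__init__()
        features = vgg16(weights="IMAGENET1K_V1").features
        # Split VGG16 into its five convolutional blocks; indices follow
        # torchvision's layer numbering, each block ending with a max-pool.
        bounds = [(0, 5), (5, 10), (10, 17), (17, 24), (24, 31)]
        self.blocks = nn.ModuleList(nn.Sequential(*features[s:e]) for s, e in bounds)
        self.attn = nn.ModuleList(SpatialAttention() for _ in bounds)
        # One small classification head per block, fed by global average pooling.
        channels = [64, 128, 256, 512, 512]
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(c, 256), nn.ReLU(), nn.Linear(256, num_classes))
            for c in channels
        )
        # Learnable fusion weights over the five layer-wise probability vectors.
        self.fusion_logits = nn.Parameter(torch.zeros(len(bounds)))

    def forward(self, x):
        layer_probs = []
        for block, attn, head in zip(self.blocks, self.attn, self.heads):
            x = block(x)
            pooled = F.adaptive_avg_pool2d(attn(x), 1).flatten(1)
            layer_probs.append(F.softmax(head(pooled), dim=1))
        probs = torch.stack(layer_probs, dim=0)          # (5, batch, num_classes)
        weights = F.softmax(self.fusion_logits, dim=0)   # normalized fusion weights
        return (weights[:, None, None] * probs).sum(dim=0)


model = LayerWiseFusionDR(num_classes=5)
scores = model(torch.randn(2, 3, 224, 224))  # two dummy fundus images
print(scores.shape)  # torch.Size([2, 5])
```

In this sketch the fusion weights are free parameters trained jointly with the heads; the paper's weighted network and Fusion Refinement Block may differ in structure.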
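The CLAHE preprocessing step can be sketched with OpenCV as below. The clip limit, tile grid size, and LAB lightness-channel strategy are common defaults for fundus images and are assumptions, not necessarily the paper's exact settings; the file names are hypothetical.

```python
# Minimal CLAHE preprocessing sketch for fundus images (assumed settings).
import cv2
import numpy as np


def enhance_fundus(path: str) -> np.ndarray:
    """Apply CLAHE to the lightness channel of a fundus image stored as BGR."""
    bgr = cv2.imread(path)
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    l_eq = clahe.apply(l)  # equalize local contrast so vessels stand out
    return cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)


enhanced = enhance_fundus("fundus.jpg")      # hypothetical input image
cv2.imwrite("fundus_clahe.jpg", enhanced)    # enhanced image for the model
```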
