Interpretable Self-Supervised Contrastive Learning for Colorectal Cancer Histopathology: GradCAM Visualization

Abstract

Accurate analysis of colorectal cancer from histopathological images is pivotal for reliable diagnosis and treatment planning. In this study, I introduce a novel framework that utilizes self-supervised contrastive learning (SSCL) for feature extraction and downstream classification of two classes of colorectal cancer: hyperplastic polyp (HP) and sessile serrated adenoma (SSA). By pre-training a deep encoder (ResNet50) with a contrastive learning paradigm, the proposed model learns discriminative representations from unlabeled data, with minimal dependence on extensive manual annotations. These representations are subsequently fine-tuned in a supervised setting and generalize well, achieving robust classification performance; the implemented model reached an accuracy of 85.86%. To further enhance the clinical relevance of the approach, I incorporate the Gradient-weighted Class Activation Mapping (GradCAM) technique to generate visual explanations that highlight the regions contributing most to the network's decisions. This interpretability component enables pathologists to verify that the model focuses on key histopathological features, thereby enhancing trust in the automated system. Overall, the integrated approach, combining self-supervised contrastive learning with GradCAM interpretability, outperforms traditional deep convolutional neural network approaches and has significant potential for improving diagnostic accuracy and fostering transparency in colorectal cancer histopathology assessments.
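
The abstract does not fix a specific contrastive objective or GradCAM implementation, so the sketches below are illustrative only. The first assumes a SimCLR-style set-up, a common choice for SSCL with a ResNet50 backbone: the classification head is replaced by a small projection head (an assumed architecture), and two augmented views of each patch are pulled together with an NT-Xent loss; the projection sizes and temperature are assumptions, not the paper's values.

```python
# Illustrative sketch (not the paper's exact recipe): SimCLR-style contrastive
# pre-training of a ResNet50 encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class ContrastiveEncoder(nn.Module):
    """ResNet50 backbone with a small projection head (assumed architecture)."""
    def __init__(self, proj_dim=128):
        super().__init__()
        backbone = models.resnet50(weights=None)
        feat_dim = backbone.fc.in_features          # 2048 for ResNet50
        backbone.fc = nn.Identity()                 # keep pooled features only
        self.backbone = backbone
        self.projector = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(inplace=True),
            nn.Linear(512, proj_dim),
        )

    def forward(self, x):
        h = self.backbone(x)                        # representation reused downstream
        z = F.normalize(self.projector(h), dim=1)   # embedding for the contrastive loss
        return h, z

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss over two augmented views; z1[i] and z2[i] form the positive pair."""
    n = z1.size(0)
    z = torch.cat([z1, z2], dim=0)                  # (2N, d), rows are L2-normalized
    sim = z @ z.t() / temperature                   # pairwise cosine similarities
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool, device=z.device), float('-inf'))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)            # each view must identify its positive
```

After supervised fine-tuning (for example, a linear classifier on the 2048-dimensional features), GradCAM can be computed from the gradients flowing into the last convolutional block. The helper below is hypothetical but follows the standard GradCAM recipe: global-average-pool the gradients to weight the feature maps, apply ReLU, and upsample to the input size.

```python
# Illustrative GradCAM helper. `model` is assumed to be the fine-tuned classifier
# returning class logits; `target_layer` would typically be the last conv block
# (e.g. the ResNet50 layer4 module).
def grad_cam(model, image, target_layer, class_idx=None):
    feats, grads = {}, {}
    fh = target_layer.register_forward_hook(lambda m, i, o: feats.update(v=o))
    bh = target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))
    logits = model(image)                                      # image: (1, 3, H, W)
    idx = logits.argmax(dim=1).item() if class_idx is None else class_idx
    model.zero_grad()
    logits[0, idx].backward()                                  # gradient of the chosen class score
    fh.remove(); bh.remove()
    weights = grads['v'].mean(dim=(2, 3), keepdim=True)        # pooled gradients per channel
    cam = F.relu((weights * feats['v']).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode='bilinear', align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalize to [0, 1]
    return cam.squeeze()                                       # heat-map to overlay on the patch
```

The resulting heat-map can be overlaid on the input patch so a pathologist can check whether the highlighted regions coincide with the histopathological features that distinguish HP from SSA.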