ExShall-CNN: An Explainable Shallow Convolutional Neural Network for Medical Image Segmentation

Abstract

Explainability is essential for AI models, especially in clinical settings where understanding a model's decisions is crucial. Despite their impressive performance, black-box AI models are unsuitable for clinical use if their operations cannot be explained to clinicians. While deep neural networks (DNNs) represent the forefront of model performance, their explanations are often not easily interpretable by humans. On the other hand, traditional machine learning models built on hand-crafted features, each designed to represent a different aspect of the input data, are generally more understandable. However, they often lack the effectiveness of advanced models because human-designed features are inherently limited. To address this, we propose ExShall-CNN, a novel explainable shallow convolutional neural network for medical image processing. This model enhances hand-crafted features to maintain human interpretability while achieving performance comparable to advanced deep convolutional networks, such as U-Net, for medical image segmentation. ExShall-CNN and its source code are publicly available at: https://github.com/MLBC-lab/ExShall-CNN
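To illustrate the general idea behind combining hand-crafted features with a shallow, interpretable network head, the toy sketch below stacks a few classical feature maps (raw intensity and Sobel gradients) and fuses them with a single per-pixel weighted sum, the simplest form of a 1×1 convolution. This is not the ExShall-CNN architecture itself; the feature choice, weights, and threshold are hypothetical, chosen only to show why such a model stays human-readable: the output is a transparent combination of named feature channels.

```python
import numpy as np

def conv2d(img, kernel):
    """'Same'-padded 2D cross-correlation in plain NumPy."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# Hand-crafted feature maps: raw intensity plus Sobel gradients.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T

img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0  # a bright square standing in for a lesion

features = np.stack([img, conv2d(img, sobel_x), conv2d(img, sobel_y)])

# Shallow, interpretable head: a 1x1 convolution, i.e. a per-pixel
# weighted sum of the named feature channels (weights are hypothetical).
weights = np.array([1.0, 0.25, 0.25])
logits = np.tensordot(weights, features, axes=1)
mask = (logits > 0.5).astype(int)  # binary segmentation of the square
```

Because every channel has a human-assigned meaning and the fusion is a single linear layer, a clinician can read off exactly how much each feature contributed to any pixel's label, which is the interpretability property the abstract contrasts with deep black-box models.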