Deep Learning Image Classification with Explainability Using SHAP: A Case Study with ResNet-50 and CIFAR-10

Abstract

This paper presents a comprehensive study of combining state-of-the-art explainability techniques with deep learning for image classification. We fine-tune a deep convolutional neural network (CNN) for the CIFAR-10 benchmark classification task, specifically a ResNet-50 architecture pretrained on the ImageNet dataset. The experimental pipeline follows a rigorous methodology comprising data preprocessing with augmentation, a thorough model training protocol, and a multi-faceted performance evaluation. The model's high classification accuracy demonstrates the effectiveness of transfer learning for this task. Crucially, this work goes beyond performance metrics by applying SHAP (SHapley Additive exPlanations), a game-theoretically grounded method, to explain the model's predictions. By computing pixel-level attributions, SHAP produces clear, interpretable visualizations of how individual pixels contribute to classification decisions. The results show that SHAP can account for both correct and incorrect predictions and that the model learns semantically meaningful features. By pairing high performance with deep interpretability, this work addresses the critical "black-box" problem in deep learning and charts a path toward more transparent, reliable, and trustworthy AI systems suitable for high-stakes real-world deployment.
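
The abstract describes a transfer-learning pipeline: an ImageNet-pretrained ResNet-50 with a replaced classification head, fine-tuned on augmented CIFAR-10 data. The sketch below illustrates that setup; the framework (PyTorch/torchvision), hyperparameters, augmentation choices, and the single-epoch loop are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch of the transfer-learning setup described in the abstract.
# Framework and hyperparameters are assumptions, not taken from the paper.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Preprocessing with augmentation; CIFAR-10 images (32x32) are upscaled to the
# 224x224 input size expected by an ImageNet-pretrained ResNet-50.
train_tf = transforms.Compose([
    transforms.Resize(224),
    transforms.RandomHorizontalFlip(),
    transforms.RandomCrop(224, padding=16),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics,
                         std=[0.229, 0.224, 0.225]),   # matching the pretrained weights
])
train_set = datasets.CIFAR10(root="data", train=True, download=True, transform=train_tf)
train_loader = DataLoader(train_set, batch_size=128, shuffle=True)

# Load ResNet-50 pretrained on ImageNet and swap the 1000-way classifier
# head for a 10-way head matching the CIFAR-10 classes.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 10)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

# One fine-tuning epoch for illustration; the paper's training protocol
# is presumably longer and includes a held-out evaluation.
model.train()
for images, labels in train_loader:
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```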
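The pixel-level attributions mentioned in the abstract can be produced with the `shap` library. Continuing from the sketch above, the snippet below uses `shap.GradientExplainer`, one of several SHAP explainers suited to deep networks; the paper does not state which variant it uses, and the background and batch sizes here are arbitrary illustrative choices.

```python
# Illustrative SHAP attribution for the fine-tuned model (assumed explainer
# variant and batch sizes; not the paper's exact configuration).
import numpy as np
import shap

model.eval()

# A small background batch approximates the reference distribution that
# Shapley values are computed against.
background, _ = next(iter(train_loader))
background = background[:50].to(device)

# A few images to explain (drawn from the training loader for brevity;
# the paper presumably explains held-out test images).
to_explain, _ = next(iter(train_loader))
to_explain = to_explain[:4].to(device)

explainer = shap.GradientExplainer(model, background)
shap_values = explainer.shap_values(to_explain)

# Older SHAP versions return a list of per-class arrays; newer versions
# return a single array with a trailing class axis. Normalize to a list.
if not isinstance(shap_values, list):
    shap_values = [shap_values[..., i] for i in range(shap_values.shape[-1])]

# Rearrange NCHW -> NHWC for plotting. In the resulting overlays, red pixels
# push the prediction toward a class and blue pixels push against it.
shap_numpy = [np.transpose(s, (0, 2, 3, 1)) for s in shap_values]
test_numpy = np.transpose(to_explain.cpu().numpy(), (0, 2, 3, 1))
shap.image_plot(shap_numpy, test_numpy)
```

Inspecting these overlays for both correctly and incorrectly classified images is how the attribution evidence described in the abstract would typically be read.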
