Bias Detection in Plant Species Classification: A Grad-CAM Analysis Reveals Light-Color Dependencies Across CNN Architectures

Abstract

Deep learning models for plant species classification have achieved remarkable accuracy, yet they operate as “black boxes,” limiting their interpretability and trustworthiness in critical applications such as biodiversity assessment and agricultural monitoring. This study addresses the transparency challenge by applying Gradient-weighted Class Activation Mapping (Grad-CAM) to visualize and interpret convolutional neural network (CNN) decision-making in plant classification tasks. We evaluated five architectures, two custom CNNs (Baseline and Improved) and three pre-trained models (VGG16, ResNet50, DenseNet121), on two datasets containing 100 and 30 plant species, respectively, totaling 69,354 images. DenseNet121 demonstrated the strongest classification performance, achieving 80.70% average accuracy on the 100-species dataset and 91.54% on the 30-species dataset. Through systematic Grad-CAM analysis, we identified a consistent bias toward light-colored plant features across all architectures, with activation intensities significantly higher for light-colored regions than for green foliage. While Grad-CAM effectively highlighted decision-relevant regions and provided meaningful visual explanations, this color bias poses a significant limitation for plant species lacking prominent light-colored characteristics and may undermine model reliability in real-world applications. Our findings contribute to the growing body of explainable AI research by providing the first comprehensive analysis of Grad-CAM limitations in botanical applications and by establishing methodological guidelines for the responsible deployment of interpretation tools in plant classification systems. These results emphasize the critical importance of bias detection and domain-specific validation when implementing explainable AI techniques in specialized domains.
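For readers unfamiliar with the technique, the sketch below illustrates the core Grad-CAM computation the study relies on: gradients of a class score are global-average-pooled into per-channel weights, which re-weight the last convolutional feature maps before a ReLU and upsampling step. It uses torchvision's off-the-shelf DenseNet121 as a stand-in; the paper's fine-tuned weights, datasets, and exact target layer are not reproduced here, so the layer choice and input size should be read as illustrative assumptions, not the authors' configuration.

```python
# Minimal Grad-CAM sketch (PyTorch). Assumes torchvision's pre-trained
# DenseNet121 as a stand-in for the paper's fine-tuned model; the target
# layer and 224x224 input size are illustrative choices.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT).eval()
target_layer = model.features  # final conv feature block of DenseNet121

activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    activations["value"] = output          # feature maps from the forward pass

def bwd_hook(module, grad_input, grad_output):
    gradients["value"] = grad_output[0]    # gradients w.r.t. those feature maps

target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

def grad_cam(image, class_idx=None):
    """Return a heatmap (H x W, values in [0, 1]) for one input image."""
    logits = model(image)                        # image: (1, 3, 224, 224)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()  # explain the top prediction
    model.zero_grad()
    logits[0, class_idx].backward()

    acts = activations["value"]                  # (1, C, h, w)
    grads = gradients["value"]                   # (1, C, h, w)
    weights = grads.mean(dim=(2, 3), keepdim=True)  # global-average-pooled grads
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))  # weighted sum + ReLU
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # rescale to [0, 1]
    return cam[0, 0].detach()

if __name__ == "__main__":
    dummy = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed leaf image
    heatmap = grad_cam(dummy)
    print(heatmap.shape)                 # torch.Size([224, 224])
```

Overlaying such a heatmap on the input image is what reveals the bias reported above: comparing mean activation values inside light-colored regions against those inside green foliage makes the color dependency directly measurable.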
