Exploring the Explainability of a Machine Learning Model for Prostate Cancer: Do Lesions Localize with the Most Important Feature Maps?
Abstract
As the use of AI grows in clinical medicine, so does the need for better explainable AI (XAI) methods. Model-based XAI methods such as GradCAM evaluate the feature maps generated by CNNs to create visual interpretations (such as heatmaps) that can be assessed qualitatively. We propose a simple method that takes the most important (highest-weighted) of these feature maps and compares it against the most important clinical feature present on the image, yielding a quantitative measure of model performance. We created four Residual Neural Networks (ResNets) to identify clinically significant prostate cancer, spanning two datasets (1. segmented prostate images and 2. full cross-sectional pelvis images (CSI)) and two training regimes (1. transfer learning and 2. from scratch), and evaluated each model on each dataset. Accuracy and AUC were also measured on a final full CSI dataset with the prostate tissue removed, as a confirmatory test set. Accuracy, AUC, and co-localization of prostate lesion centroids with the most important feature map generated by each model were tabulated and compared with co-localization of the lesion centroids with a GradCAM heatmap. Prostate lesion centroids co-localized with the most important feature map of any model generated through transfer learning ≥97% of the time. Lesion centroids co-localized with the most important feature map of models trained on the segmented dataset 86–96% of the time, but this dropped to 10% when the segmented model was tested on the full CSI dataset and 21% when the model was trained and tested on the full CSI dataset. Lesion centroids co-localized with the GradCAM heatmap 98–100% of the time on all datasets except the model trained on the segmented dataset and tested on the full CSI dataset (73%). Models trained on the full CSI dataset performed well (79–89%) when tested on the dataset with prostate tissue removed, but models trained on the segmented dataset did not (50–51%).
These results suggest that the model trained on the full CSI dataset uses features outside of the prostate to reach its conclusion, and that the most important feature map reflected this behavior better than the GradCAM heatmap did. Co-localization of the region of medical abnormality with the most important feature map could be a useful quantitative metric for future model explainability.
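The feature-map selection and centroid co-localization check described above can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' exact pipeline: the function names, the GradCAM-style global-average-pooled gradient weighting, and the 0.5 activation threshold are illustrative choices, and the lesion centroid is assumed to already be expressed in feature-map coordinates.

```python
import numpy as np

def most_important_feature_map(feature_maps, gradients):
    """Select the highest-weighted feature map, GradCAM-style.

    feature_maps: (C, H, W) activations from the final conv layer.
    gradients:    (C, H, W) gradients of the class score w.r.t. those maps.
    Each map's weight is its global-average-pooled gradient (as in GradCAM).
    Returns the index and the activations of the most important map.
    """
    weights = gradients.mean(axis=(1, 2))       # (C,) one weight per channel
    top = int(np.argmax(np.abs(weights)))       # highest-weighted channel
    return top, feature_maps[top]

def centroid_colocalizes(fmap, centroid, threshold=0.5):
    """Check whether a lesion centroid (row, col) falls inside the
    high-activation region of a single feature map.

    The map is min-max normalized to [0, 1]; the centroid is assumed to be
    given in feature-map (not full-image) coordinates. The 0.5 threshold
    is an illustrative assumption.
    """
    fmap = fmap - fmap.min()
    if fmap.max() > 0:
        fmap = fmap / fmap.max()
    r, c = centroid
    return bool(fmap[r, c] >= threshold)
```

In practice the lesion centroid from the radiologist annotation would need to be downscaled to the spatial resolution of the final convolutional layer before the lookup, and the check would be tabulated across the test set to produce the co-localization percentages reported above.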