Predicting Prostate Cancer Without a Prostate: A Potential Problem with AI
Abstract
Machine learning (ML) algorithms have demonstrated great potential for the identification and classification of prostate cancer from magnetic resonance (MR) imaging data. Many of these algorithms remain a “black box,” however, and debate persists as to how, and whether, they should be explained. This study hypothesized that without model explainability, a widely used family of methods, Convolutional Neural Networks (CNNs), may identify patterns that are not relevant to a clinician. The purpose of this study was to determine whether a CNN could classify prostate cancer on MR images without using the cancerous lesions, or even the entire prostate, in the training process. We used 126 T2-weighted MR images, each containing an abnormal prostate lesion, to create two pairs of image sets: 1a) full cross-sectional images of the pelvis (full-CSI), 1b) full-CSI with the prostate removed, 2a) segmented images of the prostate, and 2b) segmented images of the prostate with the lesion of interest removed. Residual Neural Network (ResNet) algorithms were trained and tested on the images, and accuracy and area under the receiver operating characteristic curve (AUC) were calculated. All algorithms performed well (accuracy of 81-99%, AUC of 0.83-0.99), even when a) trained on images containing the prostate/prostate lesion and tested on images with no prostate or prostate lesion, or b) trained and tested on images with no prostate or prostate lesion. These findings support the need for explainable artificial intelligence (XAI) to ensure algorithms arrive at clinically useful decisions.
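To make the evaluation pipeline concrete, the sketch below fine-tunes a torchvision ResNet for binary classification of image slices and reports the two metrics used in this study, accuracy and AUC. It is an illustration only: the directory layout, the ResNet-18 depth, the preprocessing, and the hyperparameters are assumptions, not details taken from the study's methods.

```python
# Minimal sketch, not the authors' code: fine-tune a torchvision ResNet on
# labeled MR image slices and report accuracy and AUC. Folder names, the
# ResNet-18 depth, transforms, and hyperparameters are all assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms
from sklearn.metrics import accuracy_score, roc_auc_score

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Single-channel T2-weighted slices are replicated to 3 channels to match
# the input expected by torchvision's pretrained ResNet weights.
tfm = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical layout: one subfolder per class, e.g. train/benign and
# train/significant. ImageFolder assigns class indices alphabetically,
# so here index 1 ("significant") is the positive class.
train_ds = datasets.ImageFolder("train", transform=tfm)
test_ds = datasets.ImageFolder("test", transform=tfm)
train_dl = DataLoader(train_ds, batch_size=16, shuffle=True)
test_dl = DataLoader(test_ds, batch_size=16)

# Pretrained ResNet-18 with a new 2-class head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(10):  # illustrative epoch count
    model.train()
    for x, y in train_dl:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

# Accuracy from thresholded predictions; AUC from the positive-class
# softmax probability, mirroring the metrics reported in the abstract.
model.eval()
probs, preds, labels = [], [], []
with torch.no_grad():
    for x, y in test_dl:
        p = torch.softmax(model(x.to(device)), dim=1)[:, 1].cpu()
        probs += p.tolist()
        preds += (p > 0.5).long().tolist()
        labels += y.tolist()

print(f"accuracy = {accuracy_score(labels, preds):.3f}")
print(f"AUC      = {roc_auc_score(labels, probs):.3f}")
```

Note that nothing in this pipeline constrains where in the image the network looks, which is precisely what allows the counterintuitive results above.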
Significance Statement
This study found that machine learning models built to classify clinically significant prostate cancer using predictive frameworks that rely on automatic feature detection (CNNs, here ResNet) can achieve high accuracy without ever evaluating the region of interest (in this case, the prostate tissue). Although interesting, such models would not be viable in clinical practice. These results suggest that rigorous testing and the incorporation of explainability methods are urgently needed in machine learning models for clinical medicine, to ensure that models relying on automatic feature detection methods such as CNNs select clinically relevant features.
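One concrete form such an explainability check can take is a saliency method like Grad-CAM, which highlights the image regions a CNN used for a given prediction. The sketch below is a minimal hand-rolled Grad-CAM, assuming the `model` and preprocessing from the earlier sketch; the study does not prescribe this specific XAI method.

```python
# Minimal sketch, assuming the `model` from the sketch above: a hand-rolled
# Grad-CAM that maps which image regions supported a given class. The study
# does not prescribe this specific explainability method.
import torch
import torch.nn.functional as F

def grad_cam(model, x, target_class, layer):
    """Heatmap of where `layer`'s activations supported `target_class`."""
    feats, grads = [], []

    def fwd_hook(module, inputs, output):
        feats.append(output)
        output.register_hook(grads.append)  # grad of logit w.r.t. activations

    handle = layer.register_forward_hook(fwd_hook)
    try:
        model.eval()
        logits = model(x)                    # x: (1, 3, H, W)
        model.zero_grad()
        logits[0, target_class].backward()
    finally:
        handle.remove()

    # Weight each feature map by its spatially averaged gradient, keep the
    # positive contributions, and upsample to the input resolution.
    w = grads[0].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((w * feats[0]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                        align_corners=False)
    return (cam / cam.max().clamp(min=1e-8)).squeeze().detach().cpu()

# Usage: inspect where the positive class draws its evidence; a hot region
# outside the prostate exposes exactly the failure mode shown in this study.
# heatmap = grad_cam(model, image.unsqueeze(0).to(device),
#                    target_class=1, layer=model.layer4[-1])
```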