Advancing Breast Cancer-AI Diagnostics: An Explainable Deep Learning Model Using 2D Grayscale Ultrasound Imaging
Abstract
Background
Breast cancer is the leading cause of cancer death among women worldwide. Ultrasound imaging is non-invasive and widely available, but its diagnostic precision depends heavily on operator skill. Automated systems that produce explainable results are therefore needed to help clinicians diagnose breast lesions more accurately.
Objectives
This research presents a new deep learning diagnostic system, based on the MobileNetV2 architecture, for analyzing 2D grayscale (B-mode) ultrasound images. The system integrates Grad-CAM explainability with clinical-grade functionality to generate interpretable results for both correct and incorrect predictions.
Methods
The model was trained and evaluated on the BUSI dataset, which contains 780 ultrasound images: 437 benign, 210 malignant, and 133 normal. The data were divided into training (80%) and validation (20%) sets. All images were resized to 192×192 pixels, normalized, and converted to 3-channel format. The architecture comprised a frozen MobileNetV2 base, global average pooling, dropout regularization, and a fully connected classification head. The model was trained over 7 runs totaling 150 epochs, with a batch size of 16, the Adam optimizer, and categorical cross-entropy as the loss function.
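The preprocessing steps described above (intensity normalization, grayscale-to-3-channel conversion for the MobileNetV2 input, and the 80/20 split) can be sketched as follows. This is a minimal illustration, not the authors' code; the function names and the assumption that images are already resized to 192×192 are ours:

```python
import numpy as np

IMG_SIZE = 192  # target resolution reported in the Methods

def preprocess(gray_images):
    """Normalize grayscale ultrasound images to [0, 1] and replicate the
    single channel to 3 channels, as MobileNetV2 expects RGB input.
    `gray_images` is assumed to have shape (N, 192, 192), already resized."""
    x = gray_images.astype(np.float32) / 255.0     # intensity normalization
    x = np.repeat(x[..., np.newaxis], 3, axis=-1)  # (N, 192, 192) -> (N, 192, 192, 3)
    return x

def train_val_split(n_samples, val_frac=0.2, seed=42):
    """Shuffle indices and split them 80/20, mirroring the reported setup."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_val = int(n_samples * val_frac)
    return idx[n_val:], idx[:n_val]  # train indices, validation indices

# With the BUSI dataset size (780 images), this yields 624 training
# and 156 validation images.
train_idx, val_idx = train_val_split(780)
```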
Results
The proposed model achieved 90.1% validation accuracy, with precision, recall, and F1-score of 0.90 and a macro-averaged AUC of 0.88. Grad-CAM visualizations showed precise localization of features within the lesion area. A t-SNE projection of the learned features displayed three separate clusters corresponding to malignant, benign, and normal tissue. Most errors occurred when benign and malignant lesions appeared highly similar.
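The Grad-CAM maps reported above weight each convolutional feature map by the spatial average of the class-score gradient and pass the weighted sum through a ReLU. A minimal NumPy sketch of that core computation, assuming the feature maps and their gradients have already been extracted from the network (e.g. via the framework's autodiff):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Compute a Grad-CAM heatmap.

    feature_maps: (H, W, K) activations of the last convolutional layer.
    gradients:    (H, W, K) gradient of the target class score with
                  respect to those activations (assumed precomputed).
    Returns an (H, W) heatmap scaled to [0, 1]."""
    weights = gradients.mean(axis=(0, 1))  # alpha_k: global-average-pooled gradients
    cam = np.tensordot(feature_maps, weights, axes=([2], [0]))  # weighted channel sum
    cam = np.maximum(cam, 0.0)             # ReLU: keep only positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()              # normalize for overlay on the image
    return cam
```

In practice the resulting heatmap is upsampled to the 192×192 input resolution and overlaid on the ultrasound image to show which region drove the prediction.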
Conclusion
The system combines high diagnostic accuracy with clinical interpretability, marking a major advance for ultrasound-based breast cancer screening. Future work will focus on external validation, real-time deployment, regulatory approval, and extension to multi-modality deep learning.