Enhancing Breast Cancer Classification with Stacked Convolutional Neural Networks for Whole-Slide Image Analysis
Abstract
This study explores an advanced approach to automatic breast cancer characterization by leveraging convolutional neural networks (CNNs) trained on 224×224 pixel patches from whole-slide images (WSIs). I compared two architectures, VGG-16 and WRN-4-2, on two classification problems: benign versus cancer, and benign versus DCIS versus IDC. Results indicated that WRN-4-2 slightly outperformed VGG-16 on the two-class problem but underperformed on the three-class problem. Stacking CNNs with increased input sizes (512×512, 768×768, and 1024×1024 pixels) significantly improved accuracy, with the 1024×1024 network achieving the highest performance, albeit at a higher computational cost. Dense prediction maps generated from these stacked networks enabled effective whole-slide classification, reaching approximately 90% accuracy for cancer detection; performance for distinguishing benign, DCIS, and IDC lesions was lower, at 76.6%. Future improvements could involve stain standardization, higher-resolution prediction maps using architectures such as U-net, and comparative studies of multi-resolution approaches to further refine cancer detection and classification accuracy.
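To make the dense-prediction idea concrete, the sketch below shows how per-patch classifier outputs can be assembled into a coarse prediction map by sliding a fixed-size window over an image. This is a minimal, hedged illustration only: the `classify` callable stands in for a trained CNN (VGG-16, WRN-4-2, or a stacked network), the stride and patch size are illustrative, and the toy mean-intensity "classifier" is purely a placeholder, not the study's method.

```python
import numpy as np

def dense_prediction_map(wsi, patch=224, stride=112, classify=None):
    """Slide a patch-level classifier over a (grayscale) whole-slide
    image array and collect one score per patch position, yielding a
    coarse dense prediction map.

    `classify` is a hypothetical stand-in for a trained CNN that maps
    a (patch, patch) tile to a scalar cancer probability.
    """
    h, w = wsi.shape[:2]
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    pmap = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            tile = wsi[i * stride : i * stride + patch,
                       j * stride : j * stride + patch]
            pmap[i, j] = classify(tile)
    return pmap

# Toy placeholder classifier: mean intensity as a fake "probability".
toy_classifier = lambda tile: float(tile.mean())

img = np.random.rand(448, 448)          # stand-in for a WSI region
pm = dense_prediction_map(img, classify=toy_classifier)
print(pm.shape)  # (3, 3): three 112-px strides fit per 448-px axis
```

In practice, larger input sizes (512×512 and up, as in the stacked networks above) amount to classifying bigger tiles per map cell, trading map resolution against context per prediction.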