Deep Learning for Tumor Segmentation and Multiclass Classification in Breast Ultrasound Images Using Pretrained Models
Abstract
Breast ultrasound image segmentation and classification are two crucial steps in the early diagnosis of breast cancer. In this work, we developed artificial intelligence tools for breast cancer segmentation and multiclass classification based on pretrained models. The proposed workflow includes, first, the development of a segmentation model architecture and, second, the development of a series of classification models that classify greyscale ultrasound images as normal, benign, or malignant. The pretrained models were trained and tested on the Breast Ultrasound Images (BUSI) dataset. For the image segmentation task, the models were trained on the images with the corresponding masks as the target variable. For multiclass classification, each image was labelled "benign", "normal", or "malignant" and used to train a multiclass classifier. Optuna was used for hyperparameter optimization and for testing various pretrained models to determine the best encoder (ResNet18, EfficientNet-B0, or MobileNetV2) and decoder (U-Net, U-Net++, or DeepLabV3) combination for image segmentation. For multiclass classification, five pretrained models (ResNet18, DenseNet121, InceptionV3, MobileNetV3, GoogLeNet) were optimized and tested for their ability to classify breast cancer images. The developed image segmentation models performed well in delineating lesions in the breast ultrasound images. DeepLabV3 outperformed the other segmentation architectures, with consistent performance across the train, validation, and test images (Dice coefficients of 0.87, 0.80, and 0.83, respectively). ResNet18:DeepLabV3 achieved an Intersection over Union (IoU) score of 0.78 during training. ResNet18:U-Net++ achieved the best Dice coefficient (0.83), IoU (0.71), and AUC score (0.91) on the test (unseen) dataset compared with the other models.
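The Dice coefficient and IoU figures reported above are standard overlap metrics between a predicted segmentation mask and the ground-truth mask. A minimal NumPy sketch of how they can be computed for binary masks is shown below (function names and the toy masks are illustrative, not taken from the paper's code):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2*|P ∩ T| / (|P| + |T|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou_score(pred, target, eps=1e-7):
    """IoU = |P ∩ T| / |P ∪ T| for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

# Toy example: two overlapping 4x4 lesion masks
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[1, 1, 0, 0],
                   [1, 0, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
print(round(float(dice_coefficient(pred, target)), 3))  # -> 0.857
print(round(float(iou_score(pred, target)), 3))         # -> 0.75
```

Note that Dice is always at least as large as IoU for the same pair of masks, which is consistent with the Dice (0.83) and IoU (0.71) values reported for ResNet18:U-Net++ on the test set.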
For classification of breast cancer images, ResNet18 achieved an F1 score of 0.95 and an accuracy of 0.90 on the train dataset, while InceptionV3 outperformed the other models on the test dataset with an F1 score of 0.75 and an accuracy of 0.83. We demonstrate a comprehensive approach to automating the image segmentation and multiclass classification of breast ultrasound images as benign, malignant, or normal using transfer learning models on an imbalanced ultrasound image dataset.
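The F1 and accuracy figures above summarize three-class predictions. For a multiclass problem, F1 is typically computed per class and then averaged; a pure-Python sketch of accuracy and macro-averaged F1 is given below (the paper does not state which averaging it uses, so macro averaging is an assumption here, and the label lists are toy data):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true label."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def macro_f1(y_true, y_pred, labels):
    """Unweighted mean of one-vs-rest F1 scores over the given labels."""
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# Toy predictions over the three BUSI classes
y_true = ["benign", "benign", "malignant", "normal", "malignant", "benign"]
y_pred = ["benign", "malignant", "malignant", "normal", "malignant", "benign"]
print(round(accuracy(y_true, y_pred), 3))                                  # -> 0.833
print(round(macro_f1(y_true, y_pred, ["benign", "malignant", "normal"]), 3))  # -> 0.867
```

On an imbalanced dataset such as BUSI, macro averaging weights the minority "normal" class equally with the majority "benign" class, which is one reason F1 and accuracy can diverge (as in the InceptionV3 test results of 0.75 vs 0.83).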