UniSkin-Net: A Unified Multi-Task Framework for Skin Cancer Segmentation, Classification, and Detection

Abstract

Skin cancer is one of the most widespread types of cancer, and early diagnosis is essential for increasing patient survival rates. As the volume of dermatoscopic images grows and the range of skin lesion types expands, an automated diagnostic system must be accurate and efficient at the image segmentation, classification, and detection stages. This paper introduces UniSkin-Net, a novel multi-task learning architecture that jointly addresses skin cancer segmentation, classification, and detection. This integrated approach improves diagnostic accuracy compared with conventional single-task methods. The HAM10000 dataset, comprising 10,015 dermatoscopic images spanning seven categories of skin lesions, is used to train and evaluate UniSkin-Net, with particular attention to the segmentation and classification loss functions. The architecture is built on a deep convolutional neural network (CNN) designed to support multi-task learning. Performance is evaluated using accuracy, precision, recall, F1-score, AUC, and the Dice and IoU coefficients. Our combined classifier achieves an accuracy of up to 99.98%, accompanied by high precision, recall, and F1-scores across all skin lesion types. In summary, UniSkin-Net provides a powerful approach to skin cancer diagnosis. Future work will study generalization to other datasets and explore integration of the proposed method into clinical workflows.
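
The abstract describes training with a joint emphasis on segmentation and classification losses. As a minimal illustrative sketch (not the authors' code), the snippet below shows one common way such a multi-task objective can be formed: a soft Dice loss on predicted lesion masks combined with a cross-entropy loss over the seven lesion classes. The weighting factor `alpha`, the tensor shapes, and the binary-mask assumption are all illustrative choices, not details taken from the paper.

```python
# Hedged sketch of a combined segmentation + classification loss (PyTorch).
# `alpha`, shapes, and the binary-mask assumption are illustrative only.
import torch
import torch.nn.functional as F


def dice_loss(seg_logits, seg_target, eps=1e-6):
    """Soft Dice loss on sigmoid probabilities, assuming binary lesion masks."""
    probs = torch.sigmoid(seg_logits)
    intersection = (probs * seg_target).sum(dim=(1, 2, 3))
    union = probs.sum(dim=(1, 2, 3)) + seg_target.sum(dim=(1, 2, 3))
    return 1.0 - ((2.0 * intersection + eps) / (union + eps)).mean()


def multitask_loss(seg_logits, seg_target, cls_logits, cls_target, alpha=0.5):
    """Weighted sum of a Dice segmentation loss and a cross-entropy classification loss."""
    seg = dice_loss(seg_logits, seg_target)
    cls = F.cross_entropy(cls_logits, cls_target)
    return alpha * seg + (1.0 - alpha) * cls


# Toy usage: batch of 4 images, 1-channel masks, 7 lesion classes (as in HAM10000).
seg_logits = torch.randn(4, 1, 128, 128)
seg_target = torch.randint(0, 2, (4, 1, 128, 128)).float()
cls_logits = torch.randn(4, 7)
cls_target = torch.randint(0, 7, (4,))
print(multitask_loss(seg_logits, seg_target, cls_logits, cls_target))
```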
