Enhancing Medical Image Segmentation through Negative Sample Integration: A Study on Kvasir-SEG and Augmented Datasets


Abstract

Colorectal cancer (CRC) remains a leading cause of cancer-related mortality worldwide, and early, accurate detection is critical for improving patient outcomes. Automated image segmentation using deep learning has emerged as a transformative tool for identifying colorectal abnormalities in medical imaging. This study conducts a comparative analysis of three prominent deep learning architectures—U-Net, SegNet, and ResNet—for colorectal cancer image segmentation, evaluating their performance on a custom dataset comprising 1,800 images (1,000 polyp images from the Kvasir-SEG dataset and 800 polyp-free images from the WCE Curated Colon Dataset). The dataset was preprocessed to a uniform resolution of 256 × 256 pixels and partitioned into training, validation, and test sets. Quantitative and qualitative results demonstrate that U-Net outperforms SegNet and ResNet, achieving superior segmentation accuracy (validation accuracy of 0.95) and robustness, particularly when trained on datasets that include negative samples. SegNet showed signs of overfitting and delivered unstable results, while ResNet struggled to generalize effectively. The integration of negative images improved specificity by decreasing false positive rates. Overall, the results establish U-Net as the most effective architecture for precise polyp segmentation, with significant implications for the development of robust diagnostic systems.
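As an illustration of the dataset assembly described above (1,000 polyp images labeled positive, 800 polyp-free images labeled negative, partitioned into training, validation, and test sets), the split might be sketched as follows. The 70/15/15 ratio, filename patterns, and random seed are assumptions for illustration; the paper does not report its exact partition.

```python
import random

def make_splits(n_polyp=1000, n_negative=800, ratios=(0.70, 0.15, 0.15), seed=42):
    """Assemble a labeled sample list (1 = polyp, 0 = polyp-free), shuffle it,
    and partition it into train/validation/test subsets.

    Counts match the abstract (1,000 + 800 = 1,800 images); the 70/15/15
    ratio and filenames are illustrative assumptions. In practice each image
    would also be resized to 256 x 256 (e.g. with Pillow or OpenCV) before
    being fed to the networks.
    """
    samples = [(f"kvasir_polyp_{i:04d}.jpg", 1) for i in range(n_polyp)]
    samples += [(f"wce_negative_{i:04d}.jpg", 0) for i in range(n_negative)]

    rng = random.Random(seed)   # fixed seed for a reproducible partition
    rng.shuffle(samples)

    n = len(samples)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    train = samples[:n_train]
    val = samples[n_train:n_train + n_val]
    test = samples[n_train + n_val:]
    return train, val, test

train, val, test = make_splits()
# With 1,800 images and a 70/15/15 split: 1,260 train, 270 validation, 270 test.
```

Including the 800 negative (polyp-free) images in all three subsets is what allows the models to learn to suppress false positives, which the study credits for the observed gain in specificity.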
