Deep Learning Model Applied to Real-Time Delineation of Colorectal Polyps

Abstract

Background: Deep learning models have shown considerable potential to improve diagnostic accuracy across medical fields. Although YOLACT has demonstrated real-time detection and segmentation on non-medical datasets, its application in medical settings remains underexplored. This study evaluated the performance of a YOLACT-derived Real-time Polyp Delineation Model (RTPoDeMo) for real-time use on prospectively recorded colonoscopy videos.

Methods: Twelve combinations of architectures (Mask R-CNN, YOLACT, and YOLACT++) and backbones (ResNet50, ResNet101, and DarkNet53) were tested on 2,188 colonoscopy images at three image resolutions. Dataset preparation involved pre-processing and segmentation annotation, with optimized image augmentation.

Results: RTPoDeMo, using YOLACT with a ResNet50 backbone, achieved 72.3 mAP and 32.8 FPS for real-time instance segmentation based on COCO annotations. The model achieved a per-image accuracy of 99.59% (95% CI: 99.45%–99.71%), sensitivity of 90.63% (95% CI: 78.95%–93.64%), specificity of 99.95% (95% CI: 99.93%–99.97%), and an F1-score of 0.94 (95% CI: 0.87–0.98). In validation, of 36 polyps detected by experts, RTPoDeMo missed only one, compared with six missed by senior endoscopists. The model showed good agreement with experts, with a Cohen's kappa coefficient of 0.72 (95% CI: 0.54–1.00, p < 0.0001).

Conclusions: Our model offers new perspectives on adapting YOLACT to the real-time delineation of colorectal polyps. In the future, it could improve the characterization of polyps to be resected during colonoscopy.
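
As a point of reference for the per-image metrics quoted above (accuracy, sensitivity, specificity, F1-score, and Cohen's kappa), the sketch below shows how such figures are conventionally computed from binary image-level outcomes with scikit-learn. The label arrays are illustrative placeholders, not the study's data, and the exact evaluation pipeline used by the authors is not described here.

```python
# Minimal sketch: per-image detection metrics from binary outcomes.
# expert_labels and model_labels are hypothetical placeholders.
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, cohen_kappa_score

# 1 = polyp present/detected, 0 = absent/not detected
expert_labels = np.array([1, 1, 0, 0, 1, 0, 1, 0, 0, 1])  # reference standard
model_labels  = np.array([1, 1, 0, 0, 1, 0, 0, 0, 0, 1])  # model output

# Confusion-matrix counts for the two classes (order: tn, fp, fn, tp)
tn, fp, fn, tp = confusion_matrix(expert_labels, model_labels).ravel()

accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)   # recall on polyp-positive images
specificity = tn / (tn + fp)
f1          = f1_score(expert_labels, model_labels)
kappa       = cohen_kappa_score(expert_labels, model_labels)  # agreement beyond chance

print(f"accuracy={accuracy:.4f} sensitivity={sensitivity:.4f} "
      f"specificity={specificity:.4f} F1={f1:.4f} kappa={kappa:.4f}")
```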
