Automated Mandibular Canal Segmentation on CBCT using Deep Learning


Abstract

Objective: This study aims to develop a publicly accessible dataset for mandibular canal segmentation in cone beam computed tomography (CBCT) scans and to propose a framework for automated mandibular canal segmentation.

Methods: A total of 236 CBCT scans were collected from the Stomatology Hospital of the Shantou University Medical College, and the mandibular canals in these scans were finely annotated. A custom-designed 3D UNet, named MyResUNet, along with two commonly used UNet models, served as candidate models. Soft Dice similarity coefficient (DSC) loss was used as the loss function. A post-processing step involving connected-components analysis and removal of small objects was applied during inference. Model performance was assessed using voxel accuracy (ACC), sensitivity (SEN), specificity (SPE), DSC, Hausdorff distance (HD), the 95th percentile Hausdorff distance (HD95), average surface distance (ASD), and average symmetric surface distance (ASSD).

Results: The MCSTU dataset, comprising a development dataset of 218 CBCT images and an independent test dataset of 18 CBCT images, both with fine-grained annotations, has been made publicly available. The validation loss of MyResUNet was lower than that of the two commonly used models. Post-processing significantly enhanced performance, most notably by reducing the HD metric. On the hold-out test dataset, the MyResUNet model achieved ACC, SEN, SPE, DSC, HD, HD95, ASD, and ASSD (with 95% confidence intervals) of 1 (1-1), 0.86 (0.83-0.87), 1 (1-1), 0.85 (0.83-0.86), 10.1 (8.67-13.6), 1.8 (1.6-2.2), 0.69 (0.58-0.85), and 0.72 (0.6-0.83), respectively. On the independent test dataset, the MyResUNet model obtained ACC, SEN, SPE, DSC, HD, HD95, ASD, and ASSD (with 95% confidence intervals) of 1 (1-1), 0.93 (0.91-0.95), 1 (1-1), 0.80 (0.79-0.81), 21.3 (11.7-53.9), 2.59 (2.33-3), 1 (0.96-1.21), and 0.92 (0.861-1), respectively. Both the code and trained models are publicly available.
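The soft DSC loss mentioned in the Methods is the standard differentiable relaxation of the Dice coefficient, computed on soft probability maps rather than hard binary masks. Below is a minimal NumPy sketch of that standard formulation; it illustrates the idea only and is not the authors' released implementation.

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - DSC computed on soft (probability) predictions.

    pred   : array of predicted foreground probabilities in [0, 1]
    target : binary ground-truth mask of the same shape
    eps    : smoothing term that avoids division by zero on empty masks
    """
    pred = pred.reshape(-1).astype(np.float64)
    target = target.reshape(-1).astype(np.float64)
    intersection = np.sum(pred * target)
    dice = (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)
    return 1.0 - dice
```

A perfect prediction drives the loss toward 0, while a prediction with no overlap drives it toward 1; in training frameworks the same expression is typically written with tensor operations so gradients flow through `pred`.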
Conclusion: The proposed segmentation framework achieved strong performance on both the hold-out and independent test datasets. In the future, after further validation of the model’s generalization ability, it may be applied in real clinical settings for oral surgery planning.
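The post-processing step described in the Methods (connected-components analysis followed by removal of small objects) can be sketched as below. This is an illustrative implementation using SciPy; the size threshold `min_voxels` is a hypothetical parameter, not a value reported in the paper.

```python
import numpy as np
from scipy import ndimage

def remove_small_objects(mask, min_voxels=100):
    """Keep only connected components with at least `min_voxels` voxels.

    mask       : binary 3D segmentation mask (e.g. a thresholded model output)
    min_voxels : hypothetical size threshold; small spurious components
                 below this size are discarded
    """
    labeled, num = ndimage.label(mask)          # label 3D connected components
    if num == 0:
        return mask.astype(bool)
    # Voxel count of each component (labels 1..num)
    sizes = ndimage.sum(mask, labeled, range(1, num + 1))
    keep = np.zeros(num + 1, dtype=bool)        # index 0 = background
    keep[1:] = sizes >= min_voxels
    return keep[labeled]                        # map each voxel's label to keep/drop
```

Dropping small disconnected components is what reduces the Hausdorff distance most, since HD is driven by the single worst outlier voxel in the prediction.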
