An Optimal Fusion Strategy for Automated Appendiceal Ultrasound Diagnosis and Reporting

Abstract

Background: Ultrasound diagnosis of appendicitis is challenged by operator-dependent variability, leading to inconsistent interpretation and reporting. Automated approaches leveraging deep learning have the potential to improve diagnostic accuracy and standardize reporting, but they often fail to integrate the multiple diagnostic features relevant to appendicitis.

Methods: We developed the Full Model Selection and Fusion System (FMS), an automated framework that identifies nine critical diagnostic features in appendiceal ultrasound images. FMS integrates feature-specific deep learning models trained on 10,445 annotated images and uses an independent stratified validation set of 184 cases to optimally select and fuse the best-performing model snapshots for each feature. The system was evaluated on a test cohort of 3,214 pathologically confirmed cases collected from 2019 to 2023.

Results: FMS significantly reduced the rate of unacceptable reports (defined as discrepancies in four or more features relative to ground truth) to 16.6% (95% CI, 15.2–18.1%), versus 35.4% (95% CI, 33.4–37.5%) for conventional early stopping and 32.6% (95% CI, 30.6–34.6%) for non-selective fusion (P < 0.0001).

Conclusions: A modular, feature-specific deep learning fusion framework can enhance the accuracy and consistency of automated appendiceal ultrasound diagnosis and reporting. By mitigating operator variability, FMS offers a scalable approach to reliable clinical decision support in complex ultrasound diagnostics.
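The selection step described in the Methods can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, the snapshot identifiers, and the use of plain validation accuracy as the selection criterion are all assumptions for illustration. The idea is simply that each diagnostic feature keeps its own pool of saved model snapshots, and the snapshot with the best validation score is chosen per feature before fusion.

```python
# Hypothetical sketch of per-feature snapshot selection, assuming each
# feature-specific model has been checkpointed during training and scored
# on an independent validation set. Feature names and snapshot IDs are
# illustrative, not taken from the paper.

def select_best_snapshots(val_scores):
    """For each feature, pick the snapshot with the highest validation score.

    val_scores: dict mapping feature name -> {snapshot id: validation accuracy}.
    Returns: dict mapping feature name -> best snapshot id.
    """
    return {feature: max(scores, key=scores.get)
            for feature, scores in val_scores.items()}

# Example with two of the nine features (hypothetical numbers):
val_scores = {
    "appendix_diameter": {"epoch_08": 0.91, "epoch_12": 0.94, "epoch_16": 0.93},
    "wall_thickening":   {"epoch_08": 0.88, "epoch_12": 0.86, "epoch_16": 0.87},
}
print(select_best_snapshots(val_scores))
# {'appendix_diameter': 'epoch_12', 'wall_thickening': 'epoch_08'}
```

The fused system then assembles one selected snapshot per feature into a single report generator, which is what distinguishes this strategy from conventional early stopping (one stopping point for all features) and from non-selective fusion (all snapshots combined indiscriminately).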
