Deep Learning-Based Classification of Dysphagia Severity Using M-Mode Ultrasound Imaging


Abstract

Oropharyngeal dysphagia is a common complication of neurological disorders and is associated with aspiration pneumonia, malnutrition, and increased mortality. Current diagnostic methods are limited by radiation exposure, subjective interpretation, and variable reliability, whereas ultrasound provides a portable, radiation-free alternative for bedside assessment. This study developed a deep learning (DL) model that automatically classifies dysphagia severity from M-mode ultrasound images. A total of 355 ultrasound examinations from 249 patients with clinically suspected dysphagia were collected and analyzed. Dysphagia status was classified as mild or severe based on dietary intervention requirements. Three approaches were compared to identify the most effective method: image-only, feature-only, and multimodal strategies. The multimodal model, which combined images with quantitative features, achieved superior classification performance compared with either input type alone. Model analysis highlighted key quantitative features predictive of dysphagia severity, improving the system's clinical interpretability and relevance. This combined approach automated the diagnostic process, eliminating the need for manual measurements while preserving clinical understanding. This study demonstrates that ultrasound-based artificial intelligence can provide an objective, interpretable, and radiation-free tool for automated dysphagia screening. The system has the potential to facilitate early detection, support clinical decision-making, and improve patient care across diverse healthcare settings.
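The multimodal strategy described above can be illustrated with a minimal late-fusion sketch: each modality (the M-mode image and the quantitative features) is encoded separately, the embeddings are concatenated, and a binary classifier predicts severity. This is an assumption-laden toy example, not the authors' architecture; the dimensions, feature count, and randomly initialized weights are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a small M-mode image patch and a handful of
# quantitative features (sizes are illustrative, not from the paper).
IMG_H, IMG_W = 32, 32
N_FEATURES = 6          # e.g., displacement/duration measures (assumed)
EMBED_DIM = 16

# Randomly initialized projections stand in for trained encoders.
W_img = rng.normal(0, 0.01, (IMG_H * IMG_W, EMBED_DIM))
W_feat = rng.normal(0, 0.1, (N_FEATURES, EMBED_DIM))
w_out = rng.normal(0, 0.1, (2 * EMBED_DIM,))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def classify_severity(image, features):
    """Late-fusion sketch: encode each modality, concatenate, classify.

    Returns the predicted probability of 'severe' dysphagia.
    """
    img_embed = image.reshape(-1) @ W_img            # image branch
    feat_embed = features @ W_feat                   # feature branch
    fused = np.concatenate([img_embed, feat_embed])  # multimodal fusion
    return sigmoid(fused @ w_out)

# Dummy inputs in place of a real M-mode frame and its measurements.
image = rng.normal(size=(IMG_H, IMG_W))
features = rng.normal(size=(N_FEATURES,))
p_severe = classify_severity(image, features)
print(f"P(severe) = {p_severe:.3f}")
```

In practice the image branch would be a convolutional network and the whole model trained end to end, but the fusion step, concatenating modality embeddings before the classifier head, is the core idea the abstract credits for the performance gain.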
