Automatic Classification and Acoustic Auscultation of Heart, Lung, and Bowel Sounds Using Artificial Intelligence

Abstract

Auscultation of heart, lung, and bowel sounds remains a fundamental diagnostic technique in clinical practice despite significant technological advancements in medical imaging. However, the accuracy of auscultation-based diagnoses depends heavily on clinician experience and expertise, leading to potential diagnostic inconsistencies. The objective of this study is to present a novel artificial intelligence (AI) framework for the automatic classification and acoustic differentiation of heart, lung, and bowel sounds, addressing the need for objective, reproducible diagnostic support tools. Our approach leverages recent advances in supervised machine learning and signal processing to extract distinctive acoustic signatures from publicly available, digitized heart, lung, and bowel sound recordings. By analyzing spectral, temporal, and morphological features across diverse asymptomatic populations, the algorithm achieves predictive accuracies of 65.00–91.67% and validation accuracies of 83.87–94.62% across six AI models. The clinical implications of this algorithm extend beyond diagnostic support to applications in medical education, telemedicine, and continuous patient monitoring. This work contributes to the emerging field of AI-assisted auscultation by providing a comprehensive framework for multi-organ sound classification, with the potential to improve differential diagnostic accuracy and standardization in clinical settings.
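
The abstract does not specify which acoustic features or which of the six AI models were used. As a minimal sketch, assuming a Python pipeline built on librosa for signal processing and scikit-learn for supervised classification, spectral and temporal feature extraction followed by a three-class (heart vs. lung vs. bowel) classifier could look like the following; the sample rate, feature set, and the `extract_features`/`train_sound_classifier` helpers are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score


def extract_features(path, sr=4000):
    """Extract spectral and temporal descriptors from one auscultation recording.

    Assumed feature set: MFCCs (spectral envelope), spectral centroid,
    zero-crossing rate, and short-time RMS energy.
    """
    y, sr = librosa.load(path, sr=sr)                          # resample to a common rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)         # spectral shape
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)   # spectral "brightness"
    zcr = librosa.feature.zero_crossing_rate(y)                # temporal noisiness
    rms = librosa.feature.rms(y=y)                             # short-time energy
    # Summarize each frame-level feature by mean and std to get a fixed-length vector.
    feats = [mfcc, centroid, zcr, rms]
    return np.concatenate([np.r_[f.mean(axis=1), f.std(axis=1)] for f in feats])


def train_sound_classifier(recordings):
    """Train a supervised classifier on (wav_path, label) pairs,
    where label is "heart", "lung", or "bowel"."""
    X = np.vstack([extract_features(path) for path, _ in recordings])
    y = np.array([label for _, label in recordings])
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0
    )
    clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
    print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
    return clf
```

Frame-level features are summarized by their mean and standard deviation so that recordings of different lengths map to fixed-length vectors, which keeps the sketch compatible with conventional supervised classifiers such as the random forest used here.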
