Deep Learning for Heart Sound Abnormality of Infants: Proof-of-Concept Study of 1D and 2D Representations
Abstract
Introduction: Early identification and intervention for Congenital Heart Defects (CHDs) in pediatric populations are crucial, as approximately 1% of neonates worldwide present with these conditions. Traditional diagnosis of CHDs often relies on stethoscope auscultation, which depends heavily on the clinician’s expertise and may miss subtle acoustic indicators. Objectives: This study introduces a deep-learning framework for the early diagnosis of congenital heart disease from time-series cardiac auditory signals captured with stethoscopes. Methods: The audio signals were converted into time–frequency representations using Mel-Frequency Cepstral Coefficients (MFCCs). The model combines Convolutional Neural Networks (CNNs) for feature extraction with Long Short-Term Memory (LSTM) networks to model temporal dependencies. Results: The model achieved an accuracy of 98.91% in the early detection of CHDs. While traditional diagnostic tools such as Electrocardiograms (ECG) and Phonocardiograms (PCG) remain indispensable for confirming diagnoses, most AI studies have targeted ECG and PCG datasets; this approach highlights the potential of cardiac acoustics for the early diagnosis of CHDs, which could improve clinical outcomes for infants. Notably, the dataset used in this research is publicly available, enabling wider application and model training within the research community.
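To make the described pipeline concrete, the sketch below shows one plausible way to compute MFCC features from a heart-sound recording and feed them into a hybrid CNN–LSTM classifier. It is a minimal illustration only: the paper's exact sample rate, MFCC settings, layer sizes, and training setup are not given in this abstract, so every hyperparameter here (and the use of librosa and Keras) is an assumption chosen for demonstration.

```python
# Illustrative sketch of an MFCC + CNN-LSTM pipeline for heart-sound classification.
# All constants and layer choices are assumptions, not the authors' published configuration.
import numpy as np
import librosa
import tensorflow as tf
from tensorflow.keras import layers, models

SAMPLE_RATE = 4000      # assumed sample rate for stethoscope recordings
N_MFCC = 40             # assumed number of MFCC coefficients per frame
CLIP_SECONDS = 5        # assumed fixed clip length after padding/truncation


def extract_mfcc(path: str) -> np.ndarray:
    """Load a heart-sound recording and return a (frames, N_MFCC) MFCC matrix."""
    audio, _ = librosa.load(path, sr=SAMPLE_RATE, duration=CLIP_SECONDS)
    audio = librosa.util.fix_length(audio, size=SAMPLE_RATE * CLIP_SECONDS)
    mfcc = librosa.feature.mfcc(y=audio, sr=SAMPLE_RATE, n_mfcc=N_MFCC)
    return mfcc.T  # time-major layout: one row per frame


def build_cnn_lstm(n_frames: int, n_mfcc: int = N_MFCC) -> tf.keras.Model:
    """CNN front end for local spectral patterns, LSTM for temporal dependencies."""
    inputs = layers.Input(shape=(n_frames, n_mfcc, 1))
    x = layers.Conv2D(16, (3, 3), padding="same", activation="relu")(inputs)
    x = layers.MaxPooling2D((2, 2))(x)
    x = layers.Conv2D(32, (3, 3), padding="same", activation="relu")(x)
    x = layers.MaxPooling2D((2, 2))(x)
    # Flatten the frequency and channel axes so each time step becomes a feature vector.
    x = layers.Reshape((x.shape[1], x.shape[2] * x.shape[3]))(x)
    x = layers.LSTM(64)(x)
    x = layers.Dense(32, activation="relu")(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)  # normal vs. abnormal heart sound
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```

In use, one would stack the MFCC matrices for all recordings into an array of shape (samples, frames, N_MFCC, 1), split into training and test sets, and call `model.fit`; the reshape-before-LSTM step is what lets the network treat the CNN's pooled spectral features as a temporal sequence, which is the core idea of the CNN–LSTM combination described in the abstract.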