Deep Learning for Heart Sound Abnormality Detection in Infants: A Proof-of-Concept Study of 1D and 2D Representations
Abstract
Timely diagnosis and treatment of Congenital Heart Defects (CHDs) in pediatric patients are critical, as approximately 1% of neonates worldwide are affected by these anomalies. Traditional stethoscope auscultation depends on the clinician's skill, so subtle symptoms can be missed. This study introduces a deep-learning framework for the early diagnosis of CHDs based on time-series cardiac auditory signals captured with stethoscopes. The audio signals were transformed into time-frequency representations using Mel Frequency Cepstral Coefficients (MFCC). Our architecture combines Convolutional Neural Networks (CNN) for feature extraction with Long Short-Term Memory (LSTM) networks to capture temporal dependencies, achieving an accuracy of 98.91% in early disease detection. Although modalities such as Electrocardiograms (ECG) and Phonocardiograms (PCG) remain necessary for confirming diagnoses, previous AI-driven studies have largely focused on ECG and PCG datasets. Our approach highlights the potential of cardiac acoustics for the early diagnosis of CHDs, improving clinical outcomes for infants.
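To make the described pipeline concrete, the sketch below illustrates one plausible way to compute MFCC time-frequency maps from heart-sound recordings and feed them into a CNN-LSTM classifier. The abstract does not specify the exact feature settings or layer configuration, so the choice of librosa and PyTorch, the sampling rate, the number of MFCC coefficients, and all layer widths are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch (assumed configuration, not the authors' exact architecture):
# MFCC features feed a small CNN for local feature extraction, whose frame-wise
# outputs are passed to an LSTM that models temporal dependencies, ending in a
# binary normal/abnormal classifier.
import numpy as np
import librosa
import torch
import torch.nn as nn

def extract_mfcc(wav_path: str, sr: int = 4000, n_mfcc: int = 13) -> np.ndarray:
    """Load a heart-sound recording and compute its MFCC time-frequency map."""
    signal, sr = librosa.load(wav_path, sr=sr)            # resample to a common rate
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc                                           # shape: (n_mfcc, n_frames)

class CNNLSTM(nn.Module):
    """CNN front end over MFCC frames followed by an LSTM and a classifier head."""
    def __init__(self, n_mfcc: int = 13, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        # 1D convolutions along the time axis, treating MFCC bins as channels
        self.cnn = nn.Sequential(
            nn.Conv1d(n_mfcc, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # LSTM over the downsampled frame sequence to capture temporal context
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                                 # x: (batch, n_mfcc, frames)
        feats = self.cnn(x)                               # (batch, 64, frames // 4)
        feats = feats.transpose(1, 2)                     # (batch, frames // 4, 64)
        _, (h_n, _) = self.lstm(feats)                    # last hidden state
        return self.fc(h_n[-1])                           # class logits

# Example forward pass on a dummy batch of MFCC maps
model = CNNLSTM()
dummy = torch.randn(8, 13, 200)                           # 8 recordings, 200 frames each
logits = model(dummy)                                      # (8, 2)
```

In this layout the CNN compresses each recording along time while learning local spectral patterns, and the LSTM's final hidden state summarizes the whole sequence before classification; the actual study may differ in input representation (1D waveform vs. 2D MFCC image) and network depth.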