Early Diagnosis Opportunities in Neonatal Transient Tachypnea with Electrocardiogram and Machine Learning

Abstract

Objective: This study explores the utility of electrocardiogram (ECG) parameters in conjunction with machine learning models for the early diagnosis of neonatal transient tachypnea (TTN). TTN is a common cause of respiratory distress in neonatal intensive care units, and early diagnosis has the potential to reduce invasive interventions and shorten hospital stays.

Methods: The study retrospectively examined data from 101 neonates diagnosed with TTN and 82 healthy neonates, using ECG-derived parameters such as the P, QRS, and T angles and the frontal QRS-T angle. Decision Tree, Neural Network, Random Forest, Boosting, and Support Vector Machine classifiers were evaluated, with the dataset split into 65% for training, 20% for validation, and 15% for testing.

Results: The Random Forest model outperformed the other classifiers, achieving a test accuracy of 71.4%, a mean AUC of 0.790, and a Matthews Correlation Coefficient (MCC) of 0.443. The MCC value indicated that the Random Forest model retains reliable predictive power even on an imbalanced dataset. Notably, ECG parameters such as the PR interval, V2 T-wave voltage, and SV1 voltage were identified as the features contributing most to the model's predictive performance.

Conclusions: These findings suggest that ECG-based machine learning models can support clinical decision-making by enabling non-invasive, rapid, and accurate diagnosis of TTN. Such artificial intelligence-driven systems have the potential to reduce unnecessary interventions, expedite treatment initiation, and improve neonatal outcomes. Future work should focus on improving model interpretability through explainable AI methods to ease integration into clinical practice.
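
To make the reported workflow concrete, the sketch below follows the pipeline described in the abstract under stated assumptions: tabular ECG features (angles, intervals, lead voltages) per neonate, a binary TTN label, a 65/20/15 train/validation/test split, and a Random Forest classifier evaluated with accuracy, AUC, and MCC. This is not the authors' code; the file name and column names (e.g. ecg_features.csv, ttn) are illustrative placeholders.

```python
# Minimal sketch (not the authors' implementation): Random Forest on tabular
# ECG features with a 65/20/15 split and the metrics reported in the abstract.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score, matthews_corrcoef

df = pd.read_csv("ecg_features.csv")   # hypothetical file, one row per neonate
X = df.drop(columns=["ttn"])           # ECG-derived features (angles, intervals, voltages)
y = df["ttn"]                          # 1 = TTN, 0 = healthy control

# 65% train; the remaining 35% is split into 20% validation and 15% test
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, train_size=0.65, stratify=y, random_state=42
)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=15 / 35, stratify=y_tmp, random_state=42
)

model = RandomForestClassifier(n_estimators=500, random_state=42)
model.fit(X_train, y_train)

pred = model.predict(X_test)
prob = model.predict_proba(X_test)[:, 1]
print("Test accuracy:", accuracy_score(y_test, pred))
print("AUC:", roc_auc_score(y_test, prob))
print("MCC:", matthews_corrcoef(y_test, pred))

# Feature importances indicate which ECG parameters drive the predictions
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(5))
```

MCC is a useful headline metric in this setting because, unlike raw accuracy, it accounts for all four cells of the confusion matrix, MCC = (TP·TN − FP·FN) / √((TP+FP)(TP+FN)(TN+FP)(TN+FN)), so a value such as 0.443 reflects agreement beyond chance even when the TTN and control groups are of unequal size.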
