A Comparative Study of a Predictive Model (ECG Buddy) and ChatGPT-4o for Myocardial Infarction Diagnosis via ECG Image Analysis: Performance, Accuracy, and Clinical Feasibility


Abstract

Background

Accurate and timely electrocardiogram (ECG) interpretation is critical for diagnosing myocardial infarction (MI) in emergency settings. Recent advances in multimodal Large Language Models (LLMs), such as Chat Generative Pre-trained Transformer (ChatGPT), have shown promise in the clinical interpretation of medical imaging. However, whether these models analyze waveform patterns or simply rely on text cues remains unclear, underscoring the need for direct comparisons with dedicated ECG artificial intelligence (AI) tools.

Methods

This retrospective study evaluated and compared AI models for classifying MI using a publicly available 12-lead ECG dataset from Pakistan, categorizing cases into MI-positive (239 images) and MI-negative (689 images). ChatGPT (GPT-4o, version 2024-11-20) was queried with five MI confidence options, whereas ECG Buddy for Windows analyzed the images based on ST-elevation MI, acute coronary syndrome, and myocardial injury biomarkers.

Results

Among 928 ECG recordings (25.8% MI-positive), ChatGPT achieved an accuracy of 65.95% (95% confidence interval [CI]: 62.80–69.00), area under the curve (AUC) of 57.34% (95% CI: 53.44–61.24), sensitivity of 36.40% (95% CI: 30.30–42.85), and specificity of 76.20% (95% CI: 72.84–79.33). In contrast, ECG Buddy reached an accuracy of 96.98% (95% CI: 95.67–97.99), AUC of 98.80% (95% CI: 98.30–99.43), sensitivity of 96.65% (95% CI: 93.51–98.54), and specificity of 97.10% (95% CI: 95.55–98.22). DeLong's test confirmed that ECG Buddy significantly outperformed ChatGPT (all P < .001). In an error analysis of 40 cases, ChatGPT provided clinically plausible explanations in only 7.5% of cases, whereas 35% were partially correct, 40% were completely incorrect, and 17.5% received no meaningful explanation.
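As a sanity check on the reported figures, the three headline metrics follow directly from a 2×2 confusion matrix. The sketch below is illustrative only: the true-positive/true-negative counts are back-calculated here from the reported percentages and the 239 MI-positive / 689 MI-negative case totals, and are not taken from the article itself.

```python
def classification_metrics(tp: int, fn: int, tn: int, fp: int):
    """Return (sensitivity, specificity, accuracy) as percentages."""
    sensitivity = 100 * tp / (tp + fn)          # recall on MI-positive cases
    specificity = 100 * tn / (tn + fp)          # recall on MI-negative cases
    accuracy = 100 * (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Counts back-calculated from the reported percentages (assumption, not source data):
# ChatGPT: ~87/239 true positives, ~525/689 true negatives
chatgpt = classification_metrics(tp=87, fn=152, tn=525, fp=164)
# ECG Buddy: ~231/239 true positives, ~669/689 true negatives
ecg_buddy = classification_metrics(tp=231, fn=8, tn=669, fp=20)

print([round(x, 2) for x in chatgpt])    # ≈ [36.40, 76.20, 65.95]
print([round(x, 2) for x in ecg_buddy])  # ≈ [96.65, 97.10, 96.98]
```

With these assumed counts, the computed values reproduce the reported sensitivity, specificity, and accuracy for both models, confirming the internal consistency of the Results.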

Conclusion

LLMs such as ChatGPT underperform relative to specialized tools such as ECG Buddy in ECG image-based MI diagnosis. Further training may improve ChatGPT; however, domain-specific AI remains essential for clinical accuracy. The high performance of ECG Buddy underscores the importance of specialized models for achieving reliable and robust diagnostic outcomes.
