DR-MUSIC: Deep Learning-Based Reconstruction of ECG from Millimeter-Wave Radar for Contactless Cardiac Monitoring
Abstract
Cardiovascular diseases (CVDs) remain a leading global health issue, highlighting the need for reliable, continuous, and non-intrusive cardiac monitoring. Traditional electrocardiogram (ECG) monitoring requires contact-based electrodes, leading to discomfort, signal quality issues, and limited feasibility for long-term use. This study addresses these limitations by proposing DR-MUSIC, a novel deep learning model capable of accurately reconstructing ECG waveforms from contactless millimeter-wave (mmWave) radar signals. DR-MUSIC leverages bidirectional long short-term memory (BiLSTM) networks to convert radar cardiograms (RCGs) into ECG-like waveforms. The model was trained and evaluated on the publicly available MMECG dataset, which includes synchronized radar and ECG recordings from 35 human participants under various physiological states: normal breathing, irregular breathing, post-exercise, and sleep. Over 3.2 million data samples were preprocessed using median filtering and normalization, with careful management of signal dimensionality and computational resources in preparing the data for training. The model demonstrated excellent prediction accuracy, with a root mean square error (RMSE) of 0.0153, mean absolute error (MAE) of 0.0100, R-squared (R²) of 0.9780, and a signal-to-noise ratio (SNR) of 16.57 dB. Qualitative analyses further confirmed DR-MUSIC’s ability to preserve critical ECG features, including the P wave, QRS complex, and T wave morphology. DR-MUSIC represents a clinically relevant, contactless, and robust alternative to traditional ECG monitoring. Its high accuracy and the comfort of contact-free sensing make it promising for continuous, home-based, clinical, and remote cardiac surveillance applications.
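To make the RCG-to-ECG mapping and the reported metrics concrete, the sketch below shows a minimal BiLSTM sequence-to-sequence regressor together with the RMSE, MAE, R², and SNR calculations mentioned above. This is an illustrative assumption only: the paper's exact architecture, layer sizes, window length, and training setup are not given here, so hidden size, number of layers, and the 512-sample window are hypothetical choices.

```python
# Minimal sketch (PyTorch), assuming a BiLSTM that maps each radar cardiogram (RCG)
# window sample-by-sample to an ECG-like waveform. All hyperparameters are assumptions.
import torch
import torch.nn as nn


class RCG2ECG(nn.Module):
    """BiLSTM mapping an RCG window (batch, time, 1) to an ECG-like waveform of the same shape."""

    def __init__(self, hidden_size: int = 64, num_layers: int = 2):
        super().__init__()
        self.bilstm = nn.LSTM(
            input_size=1,            # one radar displacement sample per time step
            hidden_size=hidden_size,
            num_layers=num_layers,
            batch_first=True,
            bidirectional=True,
        )
        # Project concatenated forward/backward hidden states to one ECG sample per step
        self.head = nn.Linear(2 * hidden_size, 1)

    def forward(self, rcg: torch.Tensor) -> torch.Tensor:
        features, _ = self.bilstm(rcg)
        return self.head(features)


def evaluation_metrics(pred: torch.Tensor, target: torch.Tensor) -> dict:
    """RMSE, MAE, R², and SNR (dB), following the standard definitions of the abstract's metrics."""
    err = pred - target
    rmse = torch.sqrt(torch.mean(err ** 2))
    mae = torch.mean(torch.abs(err))
    ss_res = torch.sum(err ** 2)
    ss_tot = torch.sum((target - target.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    snr_db = 10.0 * torch.log10(torch.sum(target ** 2) / ss_res)
    return {"rmse": rmse.item(), "mae": mae.item(), "r2": r2.item(), "snr_db": snr_db.item()}


if __name__ == "__main__":
    model = RCG2ECG()
    dummy_rcg = torch.randn(8, 512, 1)   # 8 windows of 512 radar samples (assumed window length)
    recon_ecg = model(dummy_rcg)
    print(recon_ecg.shape)                # torch.Size([8, 512, 1])
    print(evaluation_metrics(recon_ecg, torch.randn_like(recon_ecg)))
```

In such a setup the bidirectional recurrence lets each reconstructed ECG sample draw on both past and future radar context within the window, which is one plausible reason a BiLSTM is preferred over a unidirectional LSTM for preserving P-QRS-T morphology.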