Diffusion-DGCL: A Structure-Aware EEG Generation Framework via Graph Modeling and Marginal Distribution Alignment
Abstract
The construction of behavior recognition models for clinical use is hindered by the scarcity of available data, a problem for which recent advances in diffusion-based data augmentation offer a promising solution. Nevertheless, existing approaches remain limited by inadequate spatial modeling of EEG signals, distributional discrepancies between synthetic and real data, and insufficient sensitivity to local signal variations. These limitations motivate the design of Diffusion-DGCL, an enhanced generative framework composed of three key modules: a graph convolutional module that captures spatial dependencies based on EEG electrode topology, a nonparametric calibration module that aligns marginal distributions across the temporal-channel space, and a gated local convolutional module that enhances sensitivity to transient signal patterns. Extensive experiments on the CHB-MIT seizure detection dataset show that Diffusion-DGCL reduces the Context-FID score by 86.7% (from 0.015 to 0.002) relative to existing baselines and diffusion models. It also raises the F1-score for seizure detection from 39.2% to 76.1% under the 5% real-data setting, demonstrating its strong potential for clinical time-series generation and low-sample behavior recognition.
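The abstract does not specify how the nonparametric calibration module is implemented; the paper's own formulation may differ. As a rough illustration of the general idea of aligning marginal distributions across the temporal-channel space, the sketch below uses per-position quantile matching: each synthetic value is replaced by the real value of the same rank at that (channel, time) position. The function name `align_marginals` and the array shapes are assumptions for this example, not the authors' code.

```python
import numpy as np

def align_marginals(synthetic, real):
    """Nonparametric marginal calibration via quantile matching (illustrative sketch).

    For each (channel, time) position, the marginal distribution of the
    synthetic batch is mapped onto the empirical distribution of the real
    batch, independently of all other positions.
    Both inputs have shape (n_samples, n_channels, n_timesteps).
    """
    aligned = np.empty_like(synthetic, dtype=float)
    n_syn = synthetic.shape[0]
    n_real = real.shape[0]
    for c in range(synthetic.shape[1]):
        for t in range(synthetic.shape[2]):
            # Rank of each synthetic value within its own marginal ...
            order = np.argsort(synthetic[:, c, t])
            ranks = np.empty(n_syn, dtype=int)
            ranks[order] = np.arange(n_syn)
            # ... replaced by the real value at the matching quantile.
            real_sorted = np.sort(real[:, c, t])
            idx = np.round(ranks * (n_real - 1) / max(n_syn - 1, 1)).astype(int)
            aligned[:, c, t] = real_sorted[idx]
    return aligned
```

After this transform, each per-position marginal of the synthetic batch coincides with the real empirical marginal (exactly so when the batch sizes match), while the rank ordering of the synthetic samples at every position is preserved.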