TGCformer: A Transformer-Based Spatiotemporal Fusion Framework for Power Load Anomaly Detection

Abstract

Existing methods for power load anomaly detection suffer from several limitations, including insufficient extraction of multi-scale temporal features, difficulty in capturing long-range dependencies, and inefficient fusion of heterogeneous spatiotemporal information. To address these issues, this study proposes TGCformer, an enhanced Transformer-based model designed for dynamic spatiotemporal feature fusion. First, a dual-path spatiotemporal feature extraction module is constructed: the temporal path uses TSFresh to enhance the explicit pattern representation of the load sequences, while the spatial path employs an improved GATv2 to model dynamic correlations among grid nodes. Together, these two paths provide more interpretable and structured inputs for the Transformer encoder. Subsequently, a multi-head cross-attention mechanism is designed to guide feature fusion, with temporal features serving as the Query and graph embeddings as the Key and Value; this design integrates complementary information effectively while suppressing noise. Experimental results on the public Irish dataset demonstrate the effectiveness of the proposed model: TGCformer achieves average F1-score improvements of 0.35 and 0.53 over InceptionTime and XceptionTime, respectively.
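
To make the fusion step concrete, below is a minimal PyTorch sketch of a multi-head cross-attention block in which temporal features act as the Query and graph embeddings as the Key and Value, as described in the abstract. The class name, dimensions, and the residual-plus-normalization choice are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Hypothetical sketch of the Query/Key-Value cross-attention fusion."""

    def __init__(self, d_model: int = 128, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, temporal_feats: torch.Tensor, graph_embeds: torch.Tensor) -> torch.Tensor:
        # temporal_feats: (batch, seq_len, d_model)  -> Query (time-path features)
        # graph_embeds:   (batch, n_nodes, d_model)  -> Key and Value (spatial-path embeddings)
        fused, _ = self.attn(query=temporal_feats, key=graph_embeds, value=graph_embeds)
        # Residual connection keeps the temporal representation as the backbone of the output
        return self.norm(temporal_feats + fused)

# Toy usage with assumed shapes: 96 time steps of load features, 32 grid-node embeddings
fusion = CrossAttentionFusion()
t = torch.randn(4, 96, 128)
g = torch.randn(4, 32, 128)
out = fusion(t, g)  # (4, 96, 128)
```

Because the temporal features supply the Query, the fused output keeps the time axis of the load sequence, so it can be passed directly to the Transformer encoder as a sequence while the spatial information is attended in from the graph embeddings.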
