Deep Learning Framework for Change-Point Detection in Cloud-Native Kubernetes Node Metrics Using Transformer Architecture

Abstract

This study proposes a Transformer-based change-point detection method for modeling and anomaly detection of multidimensional time-series metrics on Kubernetes nodes. The research first analyzes the complexity and dynamics of node operating states in cloud-native environments and identifies the limitations of traditional single-threshold and statistical methods on high-dimensional, non-stationary data. To address this, an input representation combining linear embedding with positional encoding is designed to preserve both the multidimensional metric features and their temporal order. In the modeling stage, a multi-head self-attention mechanism captures global dependencies and cross-dimensional interactions, improving the model's sensitivity to complex patterns and potential change points. In the output stage, a difference-based scoring function and a normalized smoothing step score the time series step by step, and a change-point decision function built on these intensity scores significantly improves the identification of abnormal state transitions. In validation on large-scale distributed-system metric data, the proposed method outperforms existing approaches in AUC, ACC, F1-Score, and Recall, demonstrating higher accuracy, robustness, and stability. Overall, the framework extends attention-based time-series modeling at the theoretical level and supports intelligent monitoring and resource optimization in cloud-native environments at the practical level.
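The pipeline described in the abstract — linear embedding plus positional encoding, self-attention over the sequence, then a difference-based score that is normalized and smoothed — can be sketched as follows. This is a minimal illustrative version, not the authors' implementation: it uses a single attention head (the paper uses multi-head attention), untrained random weights, and hypothetical shapes and hyperparameters (`d_model`, the smoothing window) chosen only to make the structure concrete.

```python
import numpy as np

def positional_encoding(T, d):
    """Standard sinusoidal positional encoding of shape (T, d)."""
    pos = np.arange(T)[:, None]
    i = np.arange(d)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d)
    pe = np.zeros((T, d))
    pe[:, 0::2] = np.sin(angles[:, 0::2])
    pe[:, 1::2] = np.cos(angles[:, 1::2])
    return pe

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, W_q, W_k, W_v):
    """Single-head scaled dot-product attention (simplified from multi-head)."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    weights = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
    return weights @ V

def change_point_scores(metrics, d_model=16, window=3, seed=0):
    """metrics: (T, m) array of multidimensional node metrics.

    Returns a smoothed, normalized change-intensity score per time step.
    Weights are random (untrained) — illustrative only.
    """
    rng = np.random.default_rng(seed)
    T, m = metrics.shape
    # Linear embedding of the m metric dimensions, plus positional encoding.
    W_embed = rng.normal(scale=0.1, size=(m, d_model))
    X = metrics @ W_embed + positional_encoding(T, d_model)
    # Self-attention captures dependencies across all time steps.
    W_q, W_k, W_v = (rng.normal(scale=0.1, size=(d_model, d_model))
                     for _ in range(3))
    H = self_attention(X, W_q, W_k, W_v)
    # Difference-based score: change magnitude between adjacent representations.
    raw = np.concatenate([[0.0], np.linalg.norm(np.diff(H, axis=0), axis=1)])
    # Normalized smoothing: min-max normalize, then moving average.
    norm = (raw - raw.min()) / (np.ptp(raw) + 1e-8)
    return np.convolve(norm, np.ones(window) / window, mode="same")
```

A decision function as described in the abstract would then flag time steps whose smoothed intensity score exceeds a chosen threshold; the random-weight sketch above only demonstrates the data flow, since a trained model is needed for meaningful scores.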