Time Series Prediction of Backend Server Load via Deep Learning and Attention Mechanisms

Abstract

This study proposes a deep learning-based time series modeling method for backend server load prediction, addressing the limitations of traditional approaches in handling complex nonlinear features and multidimensional dependencies. The method first applies normalization and outlier processing to server operation data to ensure stability and consistency, then transforms features through linear mapping and embedding layers, and builds a dynamic modeling framework with attention mechanisms that captures both local short-term fluctuations and global long-term trends. A decoding structure generates the prediction output, and the model is trained by minimizing mean squared error. Experiments are conducted on large-scale public cluster logs covering CPU, memory, disk I/O, and network traffic, with comparative analysis against multiple models using metrics such as MSE, RMSE, MAE, and R². Results show that the proposed method achieves higher accuracy and robustness than the baseline methods and effectively reflects real server behavior. Further sensitivity experiments on hyperparameters, environmental conditions, and data perturbations confirm the adaptability and robustness of the model under varying constraints. Overall, the approach enriches theoretical perspectives on load prediction and demonstrates strong practical potential for improving system performance and resource management efficiency.
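
As a rough illustration of the pipeline the abstract describes, the sketch below implements a minimal attention-based forecaster in PyTorch. All module choices, dimensions, and hyperparameters (the LoadForecaster class, d_model, n_heads, the 48-step window, the 12-step horizon) are assumptions made for illustration, not the authors' exact architecture; the sketch only mirrors the stated steps of feature embedding, attention-based encoding, a decoding head, mean-squared-error training, and the cited evaluation metrics.

```python
# Minimal sketch of an attention-based load forecaster, assuming PyTorch.
# All names, dimensions, and hyperparameters below are illustrative
# assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn


class LoadForecaster(nn.Module):
    """Embeds multivariate load features, applies self-attention over the
    input window, and decodes a fixed-length forecast."""

    def __init__(self, n_features=4, d_model=64, n_heads=4, n_layers=2, horizon=12):
        super().__init__()
        # Linear mapping / embedding of raw features (CPU, memory, disk I/O, network)
        self.embed = nn.Linear(n_features, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        # Attention stack intended to capture short-term fluctuations and long-term trends
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Decoding head mapping the final hidden state to the forecast horizon
        self.decoder = nn.Linear(d_model, horizon)

    def forward(self, x):
        # x: (batch, window_length, n_features), assumed already normalized
        h = self.encoder(self.embed(x))
        return self.decoder(h[:, -1, :])  # (batch, horizon)


model = LoadForecaster()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()  # mean squared error objective, as stated in the abstract

# One illustrative training step on random stand-in data
x = torch.randn(32, 48, 4)   # 32 windows of 48 time steps, 4 load metrics
y = torch.randn(32, 12)      # next 12 steps of the target load metric
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()

# Evaluation with the metrics cited in the abstract (MSE, RMSE, MAE, R²)
with torch.no_grad():
    pred = model(x)
    mse = torch.mean((pred - y) ** 2)
    rmse = torch.sqrt(mse)
    mae = torch.mean(torch.abs(pred - y))
    r2 = 1 - torch.sum((y - pred) ** 2) / torch.sum((y - y.mean()) ** 2)
```

Taking the hidden state of the last time step as the decoder input is only one simple choice; pooling over the window or a full attention-based decoder would fit the abstract's description equally well.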