Deep Neural Architecture Combining Frequency and Attention Mechanisms for Cloud CPU Usage Prediction

Abstract

This study addresses CPU utilization prediction in cloud computing environments and proposes a time series modeling method based on an improved FedFormer. It first analyzes the complexity of cloud workloads under high concurrency and dynamic fluctuation, and identifies the limitations of traditional methods on nonlinear, non-stationary, and noisy data, namely insufficient accuracy and weak robustness. To address these issues, the proposed approach introduces an improved structure that integrates frequency-domain modeling with attention mechanisms: global periodic patterns are extracted via the fast Fourier transform, while multi-head self-attention captures both local and long-range dependencies, enabling efficient modeling of CPU utilization sequences. A unified feature embedding and prediction framework is constructed, in which raw time series are mapped to high-dimensional representations at the input layer and nonlinear mappings produce predictions at the output stage, optimized with a mean squared error loss. The experimental design includes multidimensional validation covering hyperparameter sensitivity, environment sensitivity, and data sensitivity. The results show that the method maintains stable predictive performance in complex cloud computing scenarios and achieves higher accuracy and robustness than multiple baseline models. These findings extend the methodology of deep time series modeling in cloud computing and provide technical support for resource scheduling, load balancing, and performance optimization.
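To make the described architecture concrete, the following is a minimal sketch in PyTorch of one plausible way to combine a top-k FFT frequency filter (for global periodic patterns) with multi-head self-attention (for local and long-range dependencies), trained against mean squared error. It is an illustration under stated assumptions, not the authors' actual FedFormer variant; all names and hyperparameters here (FrequencyAttentionBlock, CPUUsagePredictor, top_k, d_model, horizon) are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn


class FrequencyAttentionBlock(nn.Module):
    """Hypothetical encoder block: FFT-based frequency filtering
    followed by multi-head self-attention over the time dimension."""

    def __init__(self, d_model: int = 64, n_heads: int = 4, top_k: int = 16):
        super().__init__()
        self.top_k = top_k  # number of dominant frequency components to keep
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )

    def frequency_filter(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); keep only the top-k amplitude
        # frequencies to isolate global periodic patterns.
        spec = torch.fft.rfft(x, dim=1)
        amp = spec.abs().mean(dim=-1)                       # (batch, n_freq)
        k = min(self.top_k, amp.size(1))
        idx = amp.topk(k, dim=1).indices                    # dominant frequencies
        mask = torch.zeros_like(amp, dtype=torch.bool).scatter_(1, idx, True)
        spec = spec * mask.unsqueeze(-1)                    # zero out the rest
        return torch.fft.irfft(spec, n=x.size(1), dim=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.norm1(x + self.frequency_filter(x))        # global periodicity
        attn_out, _ = self.attn(x, x, x)                    # local/long-range deps
        x = self.norm2(x + attn_out)
        return x + self.ff(x)


class CPUUsagePredictor(nn.Module):
    """Sketch of the unified framework: embed the raw series into a
    high-dimensional space, apply the block, project to a forecast."""

    def __init__(self, horizon: int = 24, d_model: int = 64):
        super().__init__()
        self.embed = nn.Linear(1, d_model)   # input embedding to high dims
        self.block = FrequencyAttentionBlock(d_model)
        self.head = nn.Linear(d_model, 1)    # projection; nonlinearity lives
        self.horizon = horizon               # in the block's feed-forward

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len) of past CPU utilization values
        h = self.block(self.embed(x.unsqueeze(-1)))
        return self.head(h[:, -self.horizon :, :]).squeeze(-1)


if __name__ == "__main__":
    # Toy usage with synthetic data: 8 series of 96 past readings,
    # forecasting 24 steps ahead, optimized with MSE as in the paper.
    model = CPUUsagePredictor(horizon=24)
    x = torch.rand(8, 96)
    y_hat = model(x)                          # (8, 24)
    loss = nn.MSELoss()(y_hat, torch.rand(8, 24))
    loss.backward()
```

The top-k masking step is one common way to realize "frequency-domain modeling"; the paper may use a different frequency selection or decomposition scheme, and the residual-plus-norm layout here simply follows standard Transformer-encoder convention.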
