Multi-Modal Hierarchical Spatio-Temporal Network with Gradient-Boosting Integration for Cloud Resource Prediction
Abstract
Resource prediction in heterogeneous cloud environments is challenging due to diverse node configurations, monitoring metrics, execution logs, and injected failures. Neural models capture temporal patterns well but struggle with sparse, discrete features, while tree-based models handle categorical data but cannot represent complex spatio-temporal dependencies. We propose MHST-GB, a Multi-Modal Hierarchical Spatio-Temporal Network with Gradient-Boosting Integration. The framework employs modality-specific neural encoders whose outputs are fused through correlation-guided attention, and it combines a dual-path design of deep spatio-temporal networks and LightGBM to cover complementary feature spaces. A feedback-driven training procedure further adjusts the attention weights based on feature importance. Together with curriculum learning, an FMSE loss, Mixup, and DropConnect, MHST-GB improves robustness and generalization, enabling accurate multi-resource prediction in heterogeneous cloud environments.
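To make the dual-path idea concrete, the sketch below shows one possible arrangement of the components named in the abstract: per-modality temporal encoders, an attention-based fusion layer, and a parallel LightGBM model over tabular node features whose predictions are combined with the neural output. All module names, dimensions, the GRU encoders, and the equal-weight combiner are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the dual-path design described in the abstract.
# Assumptions (not from the paper): GRU encoders, a single-score attention
# fusion, toy data shapes, and a fixed 50/50 combination of the two paths.
import numpy as np
import torch
import torch.nn as nn
from lightgbm import LGBMRegressor


class TemporalEncoder(nn.Module):
    """GRU encoder for one monitoring-metric modality (hypothetical)."""

    def __init__(self, in_dim: int, hid_dim: int = 64):
        super().__init__()
        self.gru = nn.GRU(in_dim, hid_dim, batch_first=True)

    def forward(self, x):  # x: (batch, time, in_dim)
        _, h = self.gru(x)
        return h[-1]  # final hidden state, (batch, hid_dim)


class AttentionFusion(nn.Module):
    """Attention-weighted fusion of per-modality embeddings (assumed form)."""

    def __init__(self, hid_dim: int = 64):
        super().__init__()
        self.score = nn.Linear(hid_dim, 1)

    def forward(self, embeddings):  # list of (batch, hid_dim) tensors
        stacked = torch.stack(embeddings, dim=1)           # (batch, M, hid_dim)
        weights = torch.softmax(self.score(stacked), dim=1)  # (batch, M, 1)
        return (weights * stacked).sum(dim=1)              # (batch, hid_dim)


class NeuralPath(nn.Module):
    """Neural path: modality encoders -> attention fusion -> regressor head."""

    def __init__(self, modality_dims, hid_dim: int = 64, out_dim: int = 1):
        super().__init__()
        self.encoders = nn.ModuleList(
            [TemporalEncoder(d, hid_dim) for d in modality_dims]
        )
        self.fusion = AttentionFusion(hid_dim)
        self.head = nn.Linear(hid_dim, out_dim)

    def forward(self, modalities):  # list of (batch, time, dim_m) tensors
        embeddings = [enc(x) for enc, x in zip(self.encoders, modalities)]
        return self.head(self.fusion(embeddings))


# Toy data: two temporal modalities plus a tabular block of node features.
batch, steps = 64, 16
cpu_seq = torch.randn(batch, steps, 4)   # e.g. CPU monitoring metrics
mem_seq = torch.randn(batch, steps, 3)   # e.g. memory monitoring metrics
tabular = np.random.rand(batch, 8)       # e.g. node configuration features
target = np.random.rand(batch)

# Neural path prediction over the temporal modalities.
neural = NeuralPath(modality_dims=[4, 3])
neural_pred = neural([cpu_seq, mem_seq]).squeeze(-1).detach().numpy()

# Gradient-boosting path over the complementary sparse/categorical features.
gbm = LGBMRegressor(n_estimators=50)
gbm.fit(tabular, target)
gbm_pred = gbm.predict(tabular)

# Combine the two paths. The paper's feedback-driven integration adjusts
# attention weights from feature importance; equal weighting here is only
# a placeholder for illustration.
final_pred = 0.5 * neural_pred + 0.5 * gbm_pred
print(final_pred.shape)  # (64,)
```

In this sketch the two paths are trained independently and merged at prediction time; the feedback loop between LightGBM feature importance and the attention weights, as well as curriculum learning, the FMSE loss, Mixup, and DropConnect, are omitted.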