Self-Supervised Transfer Learning with Shared Encoders for Cross-Domain Cloud Optimization

Abstract

This paper addresses the optimization of cross-domain cloud computing tasks and proposes a unified modeling method that integrates transfer learning and self-supervised learning. To handle the data distribution differences between the source and target domains, the study introduces a feature alignment mechanism that enforces cross-domain feature space consistency through a shared encoder and a maximum mean discrepancy (MMD) constraint. In parallel, self-supervised learning constructs proxy tasks on unlabeled data, enhancing the model's ability to capture latent structures and temporal patterns and thereby improving the robustness of the learned feature representations. Within the proposed framework, the task loss, self-supervised loss, and distribution alignment loss are jointly optimized, forming an objective that balances accuracy and stability. To verify the effectiveness of the method, multiple sensitivity experiments examine the influence of hyperparameters, task load intensity, distribution differences, and noise ratios on model performance. The results show that the method achieves superior performance on key metrics such as Domain Adaptation Accuracy, MMD, and H-Score, and that it maintains strong generalization ability and stability in complex and dynamic environments. Overall, the proposed method not only improves adaptation in cross-domain tasks but also offers new insights for resource scheduling and intelligent optimization in cloud computing environments.
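The abstract does not give implementation details, but the joint objective it describes, L = L_task + λ_ssl · L_ssl + λ_mmd · L_MMD over a shared encoder, can be sketched concretely. Below is a minimal, hypothetical PyTorch sketch: the masked-reconstruction proxy task, the RBF-kernel MMD estimate, the network sizes, and the loss weights are all illustrative assumptions, not details from the paper.

```python
# Minimal sketch (not the authors' code): a shared encoder trained with the
# joint objective L = L_task + lambda_ssl * L_ssl + lambda_mmd * L_mmd.
# Proxy task, architecture, and loss weights are illustrative assumptions.
import torch
import torch.nn as nn

def rbf_mmd(x, y, sigma=1.0):
    """Squared maximum mean discrepancy between two batches, RBF kernel."""
    def kernel(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

class SharedEncoder(nn.Module):
    """One encoder applied to both domains, so features share one space."""
    def __init__(self, in_dim=32, hid=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU(),
                                 nn.Linear(hid, hid))
    def forward(self, x):
        return self.net(x)

encoder = SharedEncoder()
task_head = nn.Linear(64, 4)    # e.g., 4 scheduling classes (assumed)
recon_head = nn.Linear(64, 32)  # decoder for the proxy reconstruction task
params = (list(encoder.parameters()) + list(task_head.parameters())
          + list(recon_head.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)

x_src = torch.randn(128, 32)            # labeled source-domain batch
y_src = torch.randint(0, 4, (128,))
x_tgt = torch.randn(128, 32)            # unlabeled target-domain batch

# Proxy task: randomly mask input features, then reconstruct the original.
mask = (torch.rand_like(x_tgt) > 0.25).float()
z_src = encoder(x_src)
z_tgt_masked = encoder(mask * x_tgt)

loss_task = nn.functional.cross_entropy(task_head(z_src), y_src)
loss_ssl = nn.functional.mse_loss(recon_head(z_tgt_masked), x_tgt)
loss_mmd = rbf_mmd(z_src, encoder(x_tgt))  # align source/target features

loss = loss_task + 0.5 * loss_ssl + 0.1 * loss_mmd  # weights are assumed
opt.zero_grad()
loss.backward()
opt.step()
```

One training step is shown; in practice the three terms would be minimized over many batches, with the two weights tuned, which is presumably what the paper's hyperparameter sensitivity experiments explore.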
