Service Mesh–Enabled Resource Orchestration for Latency-Aware Microservices Deployment Across Edge–Cloud Continuums
Abstract
The proliferation of latency-sensitive applications such as autonomous driving, industrial IoT, and augmented reality necessitates the strategic distribution of microservices across the edge-cloud continuum. However, managing the deployment, discovery, and resilient communication of these distributed components presents significant challenges. Traditional orchestration platforms such as Kubernetes primarily manage container lifecycles within clustered domains but lack the intrinsic intelligence for global, latency-aware scheduling and dynamic traffic control across heterogeneous, geographically dispersed infrastructures. This paper proposes a novel, service mesh-enabled resource orchestration framework designed to optimize microservices deployment for stringent latency requirements. By integrating a decentralized service mesh’s fine-grained traffic management and observability capabilities with a centralized orchestrator’s scheduling logic, the framework enables intelligent, context-aware placement and dynamic request routing. A two-tiered scheduling algorithm is introduced, in which the orchestrator performs initial cost- and latency-optimized placement, and the service mesh executes real-time, latency-driven traffic steering based on observed network conditions. Simulation-based evaluations using a custom edge-cloud continuum testbed demonstrate that the proposed framework reduces tail latency (95th percentile) by up to 41% and improves request success rates by 23% under volatile network conditions, compared with baseline Kubernetes scheduling. The results underscore the critical role of a tightly coupled orchestration and service mesh paradigm in realizing the performance promises of pervasive edge computing.
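The abstract does not specify the algorithm's internals, so the following Python sketch only illustrates the two-tier idea it describes: an orchestrator tier that ranks candidate nodes by a weighted cost/latency score for initial placement, and a mesh tier that tracks observed latency per endpoint and shifts routing weights away from degrading paths. All names here (Node, place, LatencySteering, alpha, smoothing) are invented for illustration and are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cost: float             # monetary cost per replica (hypothetical units)
    base_latency_ms: float  # estimated RTT from the request source

# --- Tier 1: orchestrator-side placement (sketch) -----------------------
# Pick k nodes minimizing a weighted sum of normalized cost and latency;
# alpha trades off cost against latency (assumed, not from the paper).
def place(nodes, k, alpha=0.5):
    max_cost = max(n.cost for n in nodes)
    max_lat = max(n.base_latency_ms for n in nodes)
    def score(n):
        return alpha * n.cost / max_cost + (1 - alpha) * n.base_latency_ms / max_lat
    return sorted(nodes, key=score)[:k]

# --- Tier 2: mesh-side traffic steering (sketch) -------------------------
# Keep an exponentially weighted moving average (EWMA) of observed latency
# per endpoint and derive routing weights inversely proportional to it,
# so traffic drains from endpoints whose network path degrades.
class LatencySteering:
    def __init__(self, endpoints, smoothing=0.3):
        self.smoothing = smoothing
        self.ewma = {e: None for e in endpoints}

    def observe(self, endpoint, latency_ms):
        prev = self.ewma[endpoint]
        self.ewma[endpoint] = (latency_ms if prev is None
                               else self.smoothing * latency_ms
                                    + (1 - self.smoothing) * prev)

    def weights(self):
        inv = {e: 1.0 / l for e, l in self.ewma.items() if l}
        total = sum(inv.values())
        return {e: w / total for e, w in inv.items()}

if __name__ == "__main__":
    nodes = [Node("edge-a", cost=3.0, base_latency_ms=8.0),
             Node("edge-b", cost=2.5, base_latency_ms=12.0),
             Node("cloud-1", cost=1.0, base_latency_ms=45.0)]
    chosen = place(nodes, k=2, alpha=0.4)
    steer = LatencySteering([n.name for n in chosen])
    steer.observe("edge-a", 9.0)   # edge-a holds steady
    steer.observe("edge-b", 30.0)  # edge-b degrades under load
    print([n.name for n in chosen], steer.weights())
```

In a real deployment the second tier would not be hand-rolled: the inverse-latency weights computed above correspond to the per-endpoint traffic weights that a service mesh data plane (e.g., Envoy-based sidecars) can apply dynamically, which is what allows the mesh to react to volatile network conditions faster than a rescheduling cycle in the orchestrator.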