Evaluating Latency and Infrastructure Trade-offs in Serverless Computing
Abstract
Serverless computing reduces management overhead and scales automatically, yet its latency and performance differ significantly across cloud platforms. This study evaluates serverless function execution and infrastructure-level characteristics on Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). We benchmark function invocation latency in edge deployments, conduct regional latency comparisons across AWS and Azure, and use PerfKit Benchmarker to measure network throughput, network latency, and storage I/O on AWS and GCP. Results show that Azure recorded the lowest minimum latency (555 ms) but suffered from high tail delays, while GCP delivered the most consistent average performance (1.14 s). AWS exhibited moderate latency with a relatively stable distribution. At the infrastructure level, GCP nearly doubled single-stream throughput (9.35 Gb/s) compared to AWS (4.97 Gb/s), whereas AWS achieved slightly lower round-trip network latency (60 µs vs. 67 µs). Storage I/O performance was nearly identical across both providers. These findings link infrastructure characteristics to observed serverless behavior, providing actionable insights for latency-sensitive and multi-cloud application deployments.
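The distinction the abstract draws between minimum latency and tail delays can be made concrete with percentile statistics over invocation-latency samples. The sketch below is illustrative only; the sample values are hypothetical and are not measurements from this study:

```python
# Sketch: summarizing invocation-latency samples the way the study reports
# them (minimum, mean, and a tail percentile). Sample data is hypothetical.
import statistics


def latency_summary(samples_ms):
    """Return min, mean, and 95th-percentile latency for samples in ms."""
    ordered = sorted(samples_ms)
    # Nearest-rank 95th percentile: index of the ceiling of 0.95 * n, 0-based
    p95_index = max(0, int(round(0.95 * len(ordered))) - 1)
    return {
        "min": ordered[0],
        "mean": statistics.mean(ordered),
        "p95": ordered[p95_index],
    }


# Hypothetical trace: a low minimum (cf. Azure's 555 ms) can coexist
# with heavy tail delays that dominate user-perceived performance.
samples = [555, 580, 600, 610, 620, 640, 660, 700, 1800, 2400]
print(latency_summary(samples))
```

A platform can therefore "win" on minimum latency while losing on the tail, which is why the study reports both minimum and average figures rather than a single number.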