A Comparative Analysis of Statistical Anomaly Detection Methods for Cloud Service Monitoring: A Simulation-Based Evaluation Framework
Abstract
Cloud service reliability depends critically on effective anomaly detection in system metrics. This paper presents a comprehensive simulation-based evaluation framework for comparing statistical anomaly detection algorithms in cloud environments. We implement and evaluate four statistical detection methods (Z-Score, Modified Z-Score, EWMA, and Threshold-based) across four key cloud metrics (CPU usage, memory usage, network I/O, and response time) using five distinct anomaly patterns (spike, dip, level shift, trend change, and collective anomalies). Our experimental results reveal that Threshold-based detection achieves the highest overall F1-score (0.142), while Modified Z-Score detection demonstrates superior precision (0.209). The study provides empirical insights into algorithm performance trade-offs and introduces a reusable simulation framework for systematic evaluation of anomaly detection methods in cloud computing contexts.
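To make the four detector families concrete, the sketch below shows minimal Python versions of Z-Score, Modified Z-Score (MAD-based), EWMA, and fixed-threshold detection applied to a one-dimensional metric series. This is an illustrative approximation, not the paper's implementation; the cutoffs (3.0, 3.5, the EWMA smoothing factor, and the 90% CPU bound) are common textbook defaults assumed here.

```python
import numpy as np

def z_score_detect(x, threshold=3.0):
    """Flag points whose standard Z-score exceeds the threshold."""
    z = (x - x.mean()) / (x.std() + 1e-12)
    return np.abs(z) > threshold

def modified_z_score_detect(x, threshold=3.5):
    """Median/MAD-based Z-score, more robust to outliers than the mean/std version."""
    med = np.median(x)
    mad = np.median(np.abs(x - med)) + 1e-12
    mz = 0.6745 * (x - med) / mad
    return np.abs(mz) > threshold

def ewma_detect(x, alpha=0.3, k=3.0):
    """Flag points deviating from an exponentially weighted moving average
    by more than k standard deviations of the residual."""
    ewma = np.empty_like(x, dtype=float)
    ewma[0] = x[0]
    for t in range(1, len(x)):
        ewma[t] = alpha * x[t] + (1 - alpha) * ewma[t - 1]
    resid = x - ewma
    return np.abs(resid) > k * (resid.std() + 1e-12)

def threshold_detect(x, lower=None, upper=None):
    """Simple fixed-bound detection, e.g. CPU usage above 90%."""
    flags = np.zeros(len(x), dtype=bool)
    if lower is not None:
        flags |= x < lower
    if upper is not None:
        flags |= x > upper
    return flags

# Usage: a synthetic CPU-usage trace with one injected spike anomaly.
rng = np.random.default_rng(0)
cpu = np.clip(rng.normal(40, 5, 500), 0, 100)
cpu[250] = 98.0  # injected spike
print(z_score_detect(cpu)[250], threshold_detect(cpu, upper=90)[250])
```

In this hypothetical setup, each detector returns a boolean mask that can be scored against the injected anomaly labels with standard precision, recall, and F1 metrics, mirroring the evaluation described in the abstract.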