Explainable AI Framework for Anomaly Detection in Encrypted Network Traffic

Abstract

The rapid expansion of encrypted network traffic has improved privacy but also complicated the task of identifying malicious behaviors hidden within protected communication streams. Traditional intrusion detection systems cannot inspect encrypted payloads directly, which reduces visibility and inflates false-positive rates. This study proposes an Explainable Artificial Intelligence (XAI) framework designed to detect anomalies in encrypted network environments without compromising user privacy. The framework integrates flow-level behavioral features with a hybrid learning pipeline that combines deep representation models and interpretable machine-learning classifiers. To improve transparency, the system incorporates model-agnostic explanation tools such as SHAP and LIME, enabling security analysts to trace how specific traffic attributes contribute to detected anomalies. Experimental evaluations on contemporary encrypted traffic datasets demonstrate that the approach achieves high detection accuracy while offering interpretable outputs that support root-cause analysis. The findings highlight the potential of XAI-driven solutions to enhance trust, accountability, and operational effectiveness in modern security operations centers handling increasingly opaque network environments.
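To make the explanation stage concrete, the sketch below pairs an interpretable classifier over flow-level features with SHAP attributions, in the spirit of the pipeline the abstract describes. It is a minimal illustration under stated assumptions, not the paper's implementation: the feature names, synthetic data, and random-forest stand-in are hypothetical, and the deep representation model of the hybrid pipeline is omitted.

```python
# Minimal sketch of the explanation stage: an interpretable classifier over
# flow-level behavioral features, with SHAP attributing each verdict to
# individual traffic attributes. Feature names, synthetic data, and the
# random-forest stand-in are illustrative assumptions; the deep
# representation model of the hybrid pipeline is omitted here.
import numpy as np
import shap  # pip install shap
from sklearn.ensemble import RandomForestClassifier

# Hypothetical flow-level features, derivable without decrypting payloads.
feature_names = [
    "flow_duration_s", "pkts_fwd", "pkts_bwd",
    "mean_pkt_size", "mean_inter_arrival_ms", "tls_handshake_time_ms",
]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(feature_names)))   # placeholder flow records
y = (X[:, 3] + 0.5 * X[:, 4] > 1.0).astype(int)  # placeholder anomaly labels

# Interpretable classifier standing in for the pipeline's final stage.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Per-flow feature attributions for the anomaly class.
explainer = shap.TreeExplainer(clf)
sv = explainer.shap_values(X[:5])
# Older shap versions return a per-class list; newer ones return an array
# shaped (n_samples, n_features, n_classes). Handle both.
anomaly_sv = sv[1] if isinstance(sv, list) else sv[..., 1]

# Rank the features that drove each verdict, mirroring the traceability
# the framework aims to give security analysts.
for i, contribs in enumerate(anomaly_sv):
    top = sorted(zip(feature_names, contribs), key=lambda t: -abs(t[1]))[:3]
    print(f"flow {i}: " + ", ".join(f"{name}={val:+.3f}" for name, val in top))
```

In the framework described above, the same attribution step would presumably run on flows the detector flags, giving analysts a ranked list of the behavioral attributes behind each alert; LIME could be substituted where a surrogate-model explanation is preferred over tree-specific SHAP values.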
