Model Explainability Techniques for Deep Autoencoders in Encrypted Traffic Analysis

Abstract

Deep autoencoders have emerged as a powerful tool for encrypted traffic analysis, enabling the detection of anomalous patterns without requiring the decryption of sensitive data. However, the complexity of these models often obscures the rationale behind their predictions, limiting their interpretability and practical adoption in security-critical environments. This study explores model explainability techniques tailored for deep autoencoders in the context of encrypted traffic analysis. We examine methods such as feature attribution, layer-wise relevance propagation, and SHAP-based interpretation to elucidate how latent representations contribute to anomaly detection. The findings demonstrate that explainable models not only maintain high detection performance but also provide actionable insights for network security analysts, enhancing trust, transparency, and accountability in automated traffic monitoring systems. Our work bridges the gap between deep learning efficacy and interpretability in encrypted traffic analytics, offering a framework for deploying explainable, high-performance models in real-world cybersecurity settings.
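As a minimal illustration of the feature-attribution idea the abstract describes (not the paper's actual pipeline), the sketch below uses an `MLPRegressor` trained to reconstruct its input as a stand-in autoencoder, then ranks input features of an anomalous flow by their per-feature squared reconstruction error. The feature names and synthetic data are purely hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical flow-level features observable without decryption
# (names are illustrative, not from the paper).
feature_names = ["mean_pkt_size", "pkt_iat_var", "bytes_up", "bytes_down"]

# Synthetic "normal" traffic features.
X_train = rng.normal(0.0, 1.0, size=(500, 4))

# A small MLP trained to reproduce its own input acts as a
# bottlenecked autoencoder stand-in.
ae = MLPRegressor(hidden_layer_sizes=(2,), max_iter=2000, random_state=0)
ae.fit(X_train, X_train)

# An anomalous flow: one feature pushed far outside the training range.
x_anom = np.array([[0.1, 6.0, -0.2, 0.3]])

# Per-feature squared reconstruction error is a simple attribution
# score: large values mark the features the autoencoder failed to
# reconstruct, i.e. the likely drivers of the anomaly score.
per_feature_err = (ae.predict(x_anom) - x_anom) ** 2
ranking = np.argsort(-per_feature_err[0])
print(feature_names[ranking[0]])
```

SHAP-based interpretation or layer-wise relevance propagation would refine this by distributing the anomaly score across features in a theoretically grounded way, but the reconstruction-error decomposition above already gives an analyst a first, directly interpretable ranking.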
