Explainable Deep Learning Models for Detecting Suspicious Behavior in Enterprise Networks

Abstract

As technology evolves rapidly, enterprise networks are increasingly exposed to a wide range of cyber threats, including insider attacks, data breaches, and unauthorized access. Detecting suspicious behavior in these networks is critical to keeping organizational systems safe and secure. Rule-based anomaly detection methods often struggle with the complexity and volume of data in modern network environments. Explainable deep learning models offer a promising alternative because they can accurately identify abnormal network behavior while keeping model decisions transparent and interpretable. This article examines how deep learning models combined with explainable AI (XAI) techniques can be used to detect suspicious behavior in enterprise networks. The main goal is to make network security systems more transparent by integrating explainability methods with deep learning frameworks, so that security analysts can understand and trust the model's decisions. The proposed framework pairs advanced deep learning architectures, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), with explainability methods such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) to provide clear and actionable insight into model predictions. We evaluate the approach by comparing XAI-based deep learning models with traditional detection methods on a large network traffic dataset. The results show that the explainable models not only achieve higher accuracy and detection rates but also produce interpretable explanations of their predictions.
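To illustrate the general idea of combining a deep learning detector with SHAP, the following minimal sketch trains a small fully connected network on synthetic, flow-level traffic features and explains a single prediction with a model-agnostic SHAP explainer. The feature names, synthetic data, and model architecture are illustrative assumptions for demonstration only, not the exact setup described in the article.

```python
# Minimal sketch: train a small neural network on tabular network-traffic
# features and explain one prediction with SHAP. The features, data, and
# architecture are hypothetical stand-ins for the paper's CNN/RNN detectors.
import numpy as np
import shap
from tensorflow import keras

rng = np.random.default_rng(0)

# Hypothetical flow-level features (e.g., duration, byte counts, failed logins).
feature_names = ["duration", "src_bytes", "dst_bytes", "packets", "failed_logins"]
X = rng.normal(size=(1000, len(feature_names)))
# Synthetic label: flows are "suspicious" when a simple feature combination is large.
y = (X[:, 1] + 2 * X[:, 4] + rng.normal(scale=0.5, size=1000) > 1.5).astype(int)

# Small dense network standing in for the deep learning detector.
model = keras.Sequential([
    keras.layers.Input(shape=(len(feature_names),)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# Model-agnostic SHAP explanation: which features pushed one flow toward
# being flagged as suspicious.
background = X[:100]  # reference sample used by the explainer
explainer = shap.KernelExplainer(lambda d: model.predict(d, verbose=0), background)
sample = X[:1]
shap_values = explainer.shap_values(sample, nsamples=200)

for name, value in zip(feature_names, np.ravel(shap_values)):
    print(f"{name:15s} SHAP contribution: {value:+.4f}")
```

In this sketch, the per-feature SHAP contributions give an analyst a concrete, human-readable reason why the model scored a particular flow as suspicious, which is the kind of transparency the framework aims to provide.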