Federated XAI IDS: An Explainable and Privacy-Preserving Approach to Intrusion Detection Combining Federated Learning and SHAP
Abstract
Intrusion Detection Systems (IDS) are a crucial component of cybersecurity, designed to identify unauthorized activities in network environments. Traditional IDS, however, suffer from several problems, including high false-positive and false-negative rates and a lack of explainability, which make it difficult to provide adequate protection. Furthermore, centralized IDS approaches raise interpretability and data-protection concerns, especially when handling sensitive data. To overcome these drawbacks, we present Federated XAI IDS, a novel explainable and privacy-preserving IDS that improves security and interpretability by combining Federated Learning (FL) with Shapley Additive Explanations (SHAP). Our approach enables IDS models to be trained collaboratively across multiple decentralized devices while ensuring that local data remains securely on edge nodes, thus mitigating privacy risks. The Artificial Neural Network (ANN)-based IDS is distributed across four clients in a federated setup using the CICIoT2023 dataset, with model aggregation performed via FedAvg. The proposed method demonstrated efficacy in intrusion detection, achieving 88.4% training and 88.2% testing accuracy. Furthermore, SHAP was used to analyze feature importance, providing a deeper understanding of the critical attributes influencing model predictions. The feature-importance ranking that SHAP produces improves transparency and makes the model more dependable and interpretable. Our findings demonstrate how effectively Federated XAI IDS addresses the twin challenges of explainability and privacy in intrusion detection. By leveraging federated learning and explainable AI (XAI), this work advances the development of secure, interpretable, and decentralized intrusion detection systems for contemporary cybersecurity applications.
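The FedAvg aggregation step described in the abstract — averaging model parameters contributed by the four clients — can be sketched as follows. This is a minimal illustration of the standard FedAvg weighted average, not the paper's actual implementation; the function and variable names are hypothetical.

```python
# Hypothetical sketch of FedAvg: the server averages per-client model
# parameters, weighting each client by its number of local training samples.
import numpy as np

def fedavg(client_weights, client_sizes):
    """client_weights: list (one entry per client) of lists of np.ndarray,
    where each inner list holds the client's model parameters layer by layer.
    client_sizes: number of local training samples per client."""
    total = sum(client_sizes)
    num_layers = len(client_weights[0])
    aggregated = []
    for layer in range(num_layers):
        # Weighted sum of this layer's parameters across all clients.
        layer_avg = sum(w[layer] * (n / total)
                        for w, n in zip(client_weights, client_sizes))
        aggregated.append(layer_avg)
    return aggregated

# Toy example: four clients with equal data sizes and a one-layer "model",
# mirroring the four-client setup in the paper. Equal sizes reduce FedAvg
# to a simple element-wise mean.
clients = [[np.array([1.0, 2.0])], [np.array([3.0, 4.0])],
           [np.array([5.0, 6.0])], [np.array([7.0, 8.0])]]
print(fedavg(clients, [100, 100, 100, 100])[0])  # -> [4. 5.]
```

In a full federated round, each client would train the ANN locally on its own CICIoT2023 partition, send only the updated parameters to the server, and receive the aggregated model back, so raw traffic data never leaves the edge node.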