Detection of Attacks with an Adversarial Machine Learning Approach


Abstract

Machine learning methods are widely used across many domains, and the analysis of attacks is no exception. New attacks occur daily, so examining each one manually is becoming increasingly difficult: the number of human experts is limited relative to the growing volume of attacks, and human error in detection is always possible, making manual analysis a tedious and nearly impossible task. In recent years, significant effort has gone into designing machine learning and deep learning models for intrusion detection, built with varying accuracies using algorithms such as Random Forest, SVM, decision trees, logistic regression, Naive Bayes, DNN, ANN, CNN, RNN, LSTM, and GRU. In all cases a good level of accuracy has been achieved, but none of these models has been exposed to adversarial attacks to evaluate its robustness. The aim of this research is to propose a method that improves intrusion detection results using machine learning. Machine learning methods evolve continuously and are steadily replaced by methods with better performance, processing power, efficiency, and accuracy. In our proposed method, in addition to building an acceptable model with good accuracy, we attack our own model using adversarial attack methods. Generative adversarial networks (GANs), one of the frameworks well suited to mounting adversarial attacks, consist of generative models that produce new data resembling the training data.
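As a minimal illustration of the idea described above, the sketch below trains a simple classifier on synthetic two-class data and then perturbs the inputs with the fast gradient sign method (FGSM), a standard adversarial attack. This is not the paper's GAN-based attack; the dataset, the logistic-regression model, and the FGSM step size are all assumptions chosen to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an IDS dataset: two well-separated classes.
n, d = 400, 5
X = np.vstack([rng.normal(-1.0, 1.0, (n, d)),
               rng.normal(+1.0, 1.0, (n, d))])
y = np.hstack([np.zeros(n), np.ones(n)])

# Train logistic regression by plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-np.clip(X @ w + b, -30, 30)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def accuracy(Xs):
    return np.mean(((Xs @ w + b) > 0) == y)

# FGSM: move each input one step in the direction that increases its loss.
# For logistic loss, the gradient w.r.t. the input x is (p - y) * w.
eps = 1.5  # assumed perturbation budget for this toy example
p = 1.0 / (1.0 + np.exp(-np.clip(X @ w + b, -30, 30)))
X_adv = X + eps * np.sign((p - y)[:, None] * w[None, :])

clean_acc = accuracy(X)
adv_acc = accuracy(X_adv)
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```

The point mirrors the abstract's argument: a model that scores well on clean data can degrade sharply under small, deliberately crafted perturbations, which is why evaluating a detector against adversarial inputs matters.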
