A Federated Weighted Learning Algorithm against Poisoning Attacks
Abstract
The emergence of Federated Learning (FL) has provided a promising framework for distributed machine learning in which the risk of privacy leakage is minimized. However, existing FL protocols are vulnerable to malicious poisoning attacks, which compromise model integrity and data privacy. To address this issue, the Federated Weighted Learning Algorithm (FWLA) is introduced. In FWLA, the weight of each client is self-adjusted and optimized during the update process using an asynchronous method and a residual-testing method; in the designed asynchronous training scheme, each client uploads its parameters independently. Experiments show that the proposed framework achieves at least 97.8% accuracy and at most a 3.6% false acceptance rate on the CICIDS2017, UNSW-NB15, and NSL-KDD datasets, reflecting state-of-the-art performance. Furthermore, when noisy data are present in the training set, FWLA mitigates the resulting drop in accuracy, ensuring the robustness of federated learning.
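The residual-based re-weighting idea described above can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the function name, the `1 / (1 + residual)` shrinkage rule, and the number of refinement rounds are all assumptions made for the example; the core idea shown is that clients whose updates deviate strongly from the weighted average are progressively down-weighted, suppressing poisoned contributions.

```python
import math

def weighted_aggregate(updates, weights, n_rounds=3):
    """Hypothetical residual-weighted aggregation sketch (not the paper's exact FWLA).

    updates: list of per-client parameter vectors; weights: initial client weights.
    """
    dim = len(updates[0])
    for _ in range(n_rounds):
        # Normalize weights so they sum to one.
        total = sum(weights)
        weights = [w / total for w in weights]
        # Weighted average of the client updates.
        global_update = [sum(w * u[j] for w, u in zip(weights, updates))
                         for j in range(dim)]
        # Residual = distance of each client's update from the average.
        residuals = [math.dist(u, global_update) for u in updates]
        # Assumed rule: shrink the weight of high-residual (outlier) clients.
        weights = [w / (1.0 + r) for w, r in zip(weights, residuals)]
    total = sum(weights)
    weights = [w / total for w in weights]
    global_update = [sum(w * u[j] for w, u in zip(weights, updates))
                     for j in range(dim)]
    return weights, global_update

# Two honest clients near [1, 1] and one poisoned client far away:
updates = [[1.0, 1.0], [1.1, 0.9], [10.0, -10.0]]
w, agg = weighted_aggregate(updates, [1.0, 1.0, 1.0])
# The poisoned client ends up with the smallest weight, so the
# aggregated update stays close to the honest clients' updates.
```

Under this assumed rule, repeated rounds of averaging and residual testing let honest clients dominate the aggregate without any client having to reveal its raw data.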