Mitigating Adversarial Attacks Uncertainty Through Interval Analysis
Abstract
Adversarial attacks are characterized by high attack success rates and fast example generation, and they are widely used in neural network robustness evaluation and adversarial training. However, because the attack point is initialized randomly and the iterative search cannot guarantee convergence to the global optimum, existing adversarial attack methods exhibit uncertainty in a single attack and must increase the number of attack attempts to raise the success rate. This paper defines label susceptibility to analyze the attack effect. For adversarial data with high label susceptibility, using interval analysis to search for adversarial examples in the data's neighbourhood effectively mitigates the attack uncertainty and improves the attack success rate. Experimental results on multiple datasets show that, for both white-box and black-box attack methods, our method achieves attack success rates surpassing those of baseline methods that require significantly more attack attempts, while maintaining superior computational efficiency.
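To make the interval-analysis idea concrete, the sketch below shows standard interval bound propagation through a tiny two-layer ReLU network: the logits are bounded over an L-infinity ball around an input, and a flag is raised if some other class's upper logit bound can exceed the true class's lower bound, meaning an adversarial example may exist in that neighbourhood. This is an illustrative sketch of generic interval analysis, not the paper's algorithm; the function names and the two-layer architecture are assumptions for the example.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> W @ x + b.
    Positive weights map lo->lo and hi->hi; negative weights swap them."""
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    out_lo = W_pos @ lo + W_neg @ hi + b
    out_hi = W_pos @ hi + W_neg @ lo + b
    return out_lo, out_hi

def interval_relu(lo, hi):
    """ReLU is monotone, so it applies to each bound independently."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def may_contain_adversarial(x, eps, W1, b1, W2, b2, true_label):
    """Bound the logits of a 2-layer ReLU net over the L-inf ball of
    radius eps around x. Return True if another class's upper bound
    exceeds the true class's lower bound, i.e. an adversarial example
    may exist in the neighbourhood (the bounds are sound but loose)."""
    lo, hi = x - eps, x + eps
    lo, hi = interval_affine(lo, hi, W1, b1)
    lo, hi = interval_relu(lo, hi)
    lo, hi = interval_affine(lo, hi, W2, b2)
    others = [c for c in range(len(lo)) if c != true_label]
    return bool(max(hi[c] for c in others) > lo[true_label])
```

Because interval bounds are computed over the whole neighbourhood at once, such a check does not depend on where an iterative attack happens to start, which is the sense in which interval analysis can sidestep the single-attack randomness described above.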