SP-LID: Subtle Perturbation Sensitive Adversarial Example Detection Method Based on Local Intrinsic Dimension
Abstract
Computer vision models based on deep learning are vulnerable to adversarial examples: by adding subtle perturbations to an input, an attacker can cause the model to make mistakes, which can lead to serious consequences. One way to defend against such attacks is to detect and reject adversarial examples. Building on the original local intrinsic dimensionality (LID) detection method, this paper proposes an optimized LID-based detection method that characterizes the dimensional properties of adversarial examples. The method not only examines the distribution of distances from an example to its neighbors, but also evaluates the example's sensitivity to perturbations in order to determine whether it is adversarial. Four different adversarial attack strategies were used to evaluate the defense effect of the proposed method. The experimental results show that the improved LID detection method is more effective than other defense methods and performs well across different data sets.
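To make the two ingredients of the abstract concrete, the sketch below shows the standard maximum-likelihood LID estimator computed from nearest-neighbor distances, plus a hypothetical perturbation-sensitivity score (the average change in the LID estimate under small Gaussian nudges). This is only an illustration of the general idea; the function names, the noise model, and the sensitivity definition are assumptions, not the paper's exact SP-LID procedure.

```python
import numpy as np

def lid_mle(sample, neighbors, eps=1e-12):
    """Maximum-likelihood LID estimate from the distances between
    `sample` and its k nearest `neighbors`:
        LID ≈ -( (1/k) * sum_i log(r_i / r_k) )^{-1}
    where r_i are sorted neighbor distances and r_k is the largest."""
    dists = np.sort(np.linalg.norm(neighbors - sample, axis=1))
    log_ratios = np.log((dists + eps) / (dists[-1] + eps))  # eps guards log(0)
    return -1.0 / np.mean(log_ratios)

def perturbation_sensitivity(sample, neighbors, sigma=0.01, trials=10, seed=0):
    """Hypothetical sensitivity score (an assumption, not the paper's exact
    metric): mean absolute change in the LID estimate when the sample is
    perturbed by small Gaussian noise of scale `sigma`."""
    rng = np.random.default_rng(seed)
    base = lid_mle(sample, neighbors)
    deltas = [abs(lid_mle(sample + rng.normal(0.0, sigma, sample.shape),
                          neighbors) - base)
              for _ in range(trials)]
    return float(np.mean(deltas))
```

A detector in this style would threshold on both quantities: adversarial examples tend to show higher LID estimates than clean ones, and (per the abstract's second criterion) a markedly different response to small perturbations.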