Improving Adversarial Robustness of DNNs via Margin-Based Label Encoding


Abstract

Deep learning systems are known to be vulnerable to adversarial attacks: an attacker fools a model with adversarial examples crafted by adding small perturbations to legitimate inputs. Although many defense methods have been proposed to increase the robustness of deep learning systems, their effectiveness remains unsatisfactory; for example, the trade-off between robustness and accuracy persists. Recent research indicates that label encoding is a promising approach for tackling this challenge. Nevertheless, existing label-encoding techniques were not developed specifically to enhance adversarial robustness and offer limited guidance for the adversarial setting. This paper explores a new perspective on the robustness of deep learning systems and proposes a margin-based label-encoding strategy. Specifically, we define two margin-based label-encoding classification systems: margin-based binary-label classification and margin-based interval-label classification. In both systems, margins are placed between different labels to increase the cost of adversarial attacks. Furthermore, when a sample is correctly classified, the loss in these margin-based systems becomes zero, a mechanism that helps mitigate overfitting. Both theoretical analysis and experimental results demonstrate that our margin-based label-encoding methods are notably more robust than vanilla one-hot encoding while retaining classification accuracy on legitimate examples\footnote{An earlier version was accepted by the ICML 2021 Workshop on Adversarial Machine Learning (oral; see \url{https://openreview.net/forum?id=uK2CQwaV77})}.
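The abstract does not spell out the loss, but the behavior it describes (a margin between labels, with exactly zero loss once a sample is correctly classified) matches a hinge-style objective over a binary label encoding. Below is a minimal PyTorch sketch of that idea; the +1/-1 codeword per class and the per-component hinge are assumptions chosen to illustrate the mechanism, not the authors' exact formulation.

```python
import torch
import torch.nn as nn

class MarginBinaryLabelLoss(nn.Module):
    """Hinge-style loss over a +/-1 binary label encoding (illustrative sketch).

    Each class is encoded as a vector of +1/-1 targets (+1 for the true class,
    -1 elsewhere; a simplifying assumption). The per-component loss is
    max(0, margin - target * logit), so it is exactly zero once every logit
    lies on the correct side of zero by at least `margin`.
    """

    def __init__(self, num_classes: int, margin: float = 1.0):
        super().__init__()
        self.margin = margin
        # Binary (+1/-1) codewords, one row per class.
        codes = -torch.ones(num_classes, num_classes)
        codes.fill_diagonal_(1.0)
        self.register_buffer("codes", codes)

    def forward(self, logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        targets = self.codes[labels]                       # (batch, num_classes)
        per_dim = torch.clamp(self.margin - targets * logits, min=0.0)
        return per_dim.sum(dim=1).mean()

# Usage: once the logits clear the margin, the loss vanishes,
# matching the zero-loss property the abstract describes.
loss_fn = MarginBinaryLabelLoss(num_classes=3, margin=1.0)
logits = torch.tensor([[2.0, -2.0, -2.0]])   # confidently class 0
labels = torch.tensor([0])
print(loss_fn(logits, labels))               # tensor(0.)
```

Because the gradient is zero inside the margin, training stops pushing already-correct samples further, which is one plausible reading of how such a loss could reduce overfitting.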
