Learning a More Expressive Ensemble with Alternate Propagating Strategy for Enhancing Robustness


Abstract

Neural Ordinary Differential Equations (NODEs) have attracted significant attention for their memory efficiency and ability to model continuous data. However, NODE-based models are vulnerable to adversarial attacks and input perturbations, i.e., various forms of injected noise, which cause severe performance degradation. Herein, we propose a novel approach, called Alternate Propagating neural Ordinary Differential Equations (APODE), to address the vulnerability of NODEs. APODE learns representations through an alternate propagating strategy for traditional NODEs, demonstrating the robustness of cooperative modelling with more than one dynamics function. Specifically, APODE trains an ensemble of two dynamics functions to model the derivative of the representations, thereby alleviating the vulnerability issue and improving the robustness of NODEs. Unlike other ODE-based models, APODE is a simple method that efficiently enhances robustness against input perturbations and adversarial attacks. Empirical experiments on a wide variety of tasks demonstrate the superiority of APODE over baseline models in terms of robustness and expressive capacity, with performance improvements of 9.9% to 21.6% under adversarial attacks on the CIFAR-10 dataset.
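The core idea in the abstract — two dynamics functions that take turns modelling the derivative of the hidden state — can be sketched as follows. This is a minimal illustrative sketch only, assuming simple linear dynamics and a fixed-step Euler solver; the function names, shapes, and alternation-per-step scheme are our assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_dynamics(dim):
    """A tiny random linear dynamics function f(h, t) = W @ h (assumed form)."""
    W = rng.normal(scale=0.1, size=(dim, dim))
    return lambda h, t: W @ h

def alternate_euler(h0, f1, f2, t0=0.0, t1=1.0, steps=10):
    """Integrate dh/dt with fixed-step Euler, alternating f1 and f2 each step.

    This mimics the 'alternate propagating' idea: an ensemble of two
    dynamics functions cooperatively drives the same trajectory.
    """
    h, t = h0.copy(), t0
    dt = (t1 - t0) / steps
    for k in range(steps):
        f = f1 if k % 2 == 0 else f2  # alternate between the two dynamics
        h = h + dt * f(h, t)
        t += dt
    return h

dim = 4
f1, f2 = make_dynamics(dim), make_dynamics(dim)
h0 = rng.normal(size=dim)
h_T = alternate_euler(h0, f1, f2)
print(h_T.shape)
```

In a trained model, `f1` and `f2` would be neural networks optimized jointly, and the solver would typically be an adaptive ODE integrator rather than plain Euler; the sketch only shows the alternation structure.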
