
Generalizable Adversarial Examples Detection Based on Bi-model Decision Mismatch

2019-04-22
João Monteiro, Isabela Albuquerque, Zahid Akhtar, Tiago H. Falk

Abstract

Modern applications of artificial neural networks have yielded remarkable performance gains in a wide range of tasks. However, recent studies have discovered that such a modelling strategy is vulnerable to adversarial examples, i.e. examples with subtle perturbations, often too small to be perceptible to humans, that can easily fool neural networks. Defense techniques against adversarial examples have been proposed, but ensuring robust performance against varying or novel types of attacks remains an open problem. In this work, we focus on the detection setting, in which case attackers become identifiable while models remain vulnerable. In particular, we employ the decision layers of independently trained models as features for posterior detection. The proposed framework does not require any prior knowledge of adversarial example generation techniques, and can be employed directly with unmodified off-the-shelf models. Experiments on the standard MNIST and CIFAR10 datasets provide empirical evidence that this detection approach generalizes well not only across different adversarial example generation methods but also to quality-degradation attacks. Non-linear binary classifiers trained on top of the proposed features achieve a high detection rate (>90%) on a set of white-box attacks and maintain this performance when tested against unseen attacks.
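The core idea from the abstract — concatenating the decision-layer outputs of two independently trained classifiers and feeding them to a non-linear binary detector — can be sketched in a few lines. The sketch below is an illustrative assumption, not the authors' actual implementation: names such as `model_a`, `model_b`, `mismatch_features`, and `make_detector` are hypothetical, and PyTorch is assumed only for concreteness.

```python
# Hedged sketch of bi-model decision-mismatch detection: two frozen,
# independently trained classifiers provide decision-layer outputs that
# are concatenated into features for a binary (clean vs. adversarial)
# detector. All names are illustrative, not from the paper's code.
import torch
import torch.nn as nn


def mismatch_features(model_a, model_b, x):
    """Concatenate the decision-layer outputs of two frozen models."""
    model_a.eval()
    model_b.eval()
    with torch.no_grad():
        logits_a = model_a(x)  # shape: (batch, num_classes)
        logits_b = model_b(x)  # shape: (batch, num_classes)
    # Adversarial inputs tend to elicit mismatched decisions across
    # independently trained models, which the detector can exploit.
    return torch.cat([logits_a, logits_b], dim=1)


def make_detector(num_classes, hidden=64):
    """Non-linear binary classifier on top of the concatenated decisions."""
    return nn.Sequential(
        nn.Linear(2 * num_classes, hidden),
        nn.ReLU(),
        nn.Linear(hidden, 1),  # single logit: adversarial vs. clean
    )
```

Per the abstract, such a detector would be trained on features from clean inputs and from a set of white-box attacks, and is reported to maintain its detection rate on attack types not seen during training.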

URL

http://arxiv.org/abs/1802.07770

PDF

http://arxiv.org/pdf/1802.07770

