
Beyond Explainability: Leveraging Interpretability for Improved Adversarial Learning

2019-04-21
Devinder Kumar, Ibrahim Ben-Daya, Kanav Vats, Jeffery Feng, Graham Taylor, and Alexander Wong

Abstract

In this study, we propose leveraging interpretability for tasks beyond explainability alone. In particular, this study puts forward a novel strategy for applying gradient-based interpretability to adversarial examples, where we use the insights gained to aid adversarial learning. More specifically, we introduce the concept of spatially constrained one-pixel adversarial perturbations, where we guide the learning of such perturbations toward more susceptible areas identified via gradient-based interpretability. Experimental results on different benchmark datasets show that this spatially constrained one-pixel perturbation strategy noticeably improves the speed of convergence and produces successful attacks that are also visually difficult to perceive, illustrating an effective use of interpretability methods beyond pure explainability.
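The idea can be illustrated with a short sketch (not the authors' code): compute a gradient-based saliency map for the input, keep only the top-k most susceptible pixel locations, and restrict the one-pixel perturbation search to those locations. The sketch below assumes a PyTorch image classifier and uses a simple random search over the salient region; the paper's actual optimization procedure may differ (the original one-pixel attack literature uses differential evolution), and the `top_k`/`iters` budget values here are illustrative.

```python
# Hedged sketch of a spatially constrained one-pixel attack guided by
# gradient-based saliency. Assumes `model` is a PyTorch classifier taking
# (N, C, H, W) inputs and `image` is a single (C, H, W) tensor in [0, 1].
import torch
import torch.nn.functional as F

def saliency_map(model, image, label):
    """Gradient-based saliency: |d loss / d pixel|, summed over channels."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image.unsqueeze(0)), label.unsqueeze(0))
    loss.backward()
    return image.grad.abs().sum(dim=0)  # shape (H, W)

def constrained_one_pixel_attack(model, image, label, top_k=100, iters=500, seed=0):
    """Search for a single-pixel perturbation restricted to the top_k most
    salient locations rather than the full image."""
    gen = torch.Generator().manual_seed(seed)
    sal = saliency_map(model, image, label)
    flat_idx = sal.flatten().topk(top_k).indices  # most susceptible pixel indices
    _, h, w = image.shape
    with torch.no_grad():
        for _ in range(iters):
            idx = flat_idx[torch.randint(top_k, (1,), generator=gen)].item()
            y, x = divmod(idx, w)                     # recover 2D location
            candidate = image.clone()
            candidate[:, y, x] = torch.rand(image.shape[0], generator=gen)
            pred = model(candidate.unsqueeze(0)).argmax(dim=1)
            if pred.item() != label.item():           # misclassification found
                return y, x, candidate
    return None  # no successful attack within the search budget
```

Constraining the search space to salient pixels is what the abstract credits for faster convergence: the attack only explores locations the interpretability method already flags as influential on the model's decision.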

URL

http://arxiv.org/abs/1904.09633

PDF

http://arxiv.org/pdf/1904.09633

