
Interpreting Adversarial Examples with Attributes

2019-04-17
Sadaf Gulshad, Jan Hendrik Metzen, Arnold Smeulders, Zeynep Akata

Abstract

The vulnerability of deep computer vision systems to imperceptible, carefully crafted noise has raised questions about the robustness of their decisions. We take a step back and approach this problem from an orthogonal direction. We propose to enable black-box neural networks to justify their reasoning, both for clean and for adversarial examples, by leveraging attributes, i.e., visually discriminative properties of objects. We rank attributes based on their class relevance, i.e., how the classification decision changes when the input is slightly perturbed visually, as well as their image relevance, i.e., how well the attributes can be localized on both clean and perturbed images. We present comprehensive experiments on attribute prediction, adversarial example generation, and adversarially robust learning, together with qualitative and quantitative analyses using predicted attributes on three benchmark datasets.
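The ranking described above combines two signals per attribute: class relevance (how much the attribute's predicted score shifts under an adversarial perturbation) and image relevance (how well the attribute can be localized in the image). A minimal sketch of one way to combine them is below; all function names, attribute names, and scores are illustrative assumptions, not taken from the paper's code.

```python
# Hypothetical sketch: rank attributes by combining class relevance
# (score shift between clean and adversarial inputs) with image
# relevance (a localization score). All values are illustrative.

def rank_attributes(clean_scores, adv_scores, localization_scores):
    """Return attributes sorted by combined relevance, highest first."""
    ranked = []
    for name in clean_scores:
        # Class relevance: how strongly the perturbation moved this
        # attribute's predicted score.
        class_relevance = abs(clean_scores[name] - adv_scores[name])
        # Image relevance: how well the attribute grounds in the image.
        image_relevance = localization_scores.get(name, 0.0)
        ranked.append((name, class_relevance * image_relevance))
    return sorted(ranked, key=lambda item: item[1], reverse=True)

# Example: the "striped" attribute collapses under the perturbation
# and is well localized, so it ranks first.
clean = {"striped": 0.9, "has_beak": 0.1, "red_wing": 0.6}
adv = {"striped": 0.2, "has_beak": 0.1, "red_wing": 0.5}
loc = {"striped": 0.8, "has_beak": 0.3, "red_wing": 0.7}
top = rank_attributes(clean, adv, loc)
```

How the two signals are actually computed and weighted is specific to the paper; this sketch only shows the ranking structure the abstract describes.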

URL

http://arxiv.org/abs/1904.08279

PDF

http://arxiv.org/pdf/1904.08279
