
Analyzing the Interpretability Robustness of Self-Explaining Models

2019-05-27
Haizhong Zheng, Earlence Fernandes, Atul Prakash

Abstract

Recently, interpretable models called self-explaining models (SEMs) have been proposed with the goal of providing interpretability robustness. We evaluate the interpretability robustness of SEMs and show that explanations provided by SEMs as currently proposed are not robust to adversarial inputs. Specifically, we successfully created adversarial inputs that do not change the model outputs but cause significant changes in the explanations. We find that even though current SEMs use stable coefficients for mapping explanations to output labels, they do not consider the robustness of the first stage of the model that creates interpretable basis concepts from the input, leading to non-robust explanations. Our work makes a case for future work to start examining how to generate interpretable basis concepts in a robust way.
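The attack the abstract describes, perturbing an input so the prediction stays fixed while the concept-based explanation shifts, can be sketched on a toy self-explaining model. Everything below is an illustrative assumption, not the paper's actual architecture or attack: the SEM is modeled as concepts h(x) = tanh(Wx) combined with fixed coefficients theta, the "explanation" is the concept vector itself, and the search is a simple projected sign-gradient ascent on the explanation distance that reverts any step flipping the label.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy SEM: concepts h(x) = tanh(W x), fixed interpretable
# coefficients theta, prediction = sign(theta . h(x)).
# The "explanation" here is simply the concept vector h(x).
W = rng.standard_normal((4, 5))
theta = rng.standard_normal(4)

def concepts(x):
    return np.tanh(W @ x)

def predict(x):
    return int(np.sign(theta @ concepts(x)))

def attack_explanation(x0, eps=0.5, step=0.05, iters=200):
    """Search for a perturbation delta (||delta||_inf <= eps) that keeps
    the predicted label unchanged while pushing the concept-based
    explanation as far as possible from the original one."""
    h0 = concepts(x0)
    y0 = predict(x0)
    delta = 0.01 * rng.standard_normal(x0.shape)
    while predict(x0 + delta) != y0:  # ensure a label-preserving start
        delta *= 0.5
    best_delta = delta.copy()
    best_dist = np.linalg.norm(concepts(x0 + delta) - h0)
    for _ in range(iters):
        h = concepts(x0 + delta)
        # Gradient of ||h - h0||^2 w.r.t. delta (tanh' = 1 - h^2).
        grad = 2.0 * W.T @ ((h - h0) * (1.0 - h ** 2))
        delta = np.clip(delta + step * np.sign(grad), -eps, eps)
        if predict(x0 + delta) == y0:
            dist = np.linalg.norm(concepts(x0 + delta) - h0)
            if dist > best_dist:
                best_delta, best_dist = delta.copy(), dist
        else:
            delta = best_delta.copy()  # revert label-flipping steps
    return best_delta, best_dist

x0 = rng.standard_normal(5)
delta, shift = attack_explanation(x0)
print("label preserved:", predict(x0 + delta) == predict(x0))
print("explanation shift:", shift)
```

The key design point mirrors the paper's observation: the constraint only protects the output (the coefficient stage), so the optimizer is free to move the first-stage concepts, and the explanation drifts even though the prediction never changes.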

URL

http://arxiv.org/abs/1905.12429

PDF

http://arxiv.org/pdf/1905.12429
