
Visual Explanation by Interpretation: Improving Visual Feedback Capabilities of Deep Neural Networks

2019-03-08
Jose Oramas, Kaili Wang, Tinne Tuytelaars

Abstract

Interpretation and explanation of deep models is critical for the wide adoption of systems that rely on them. In this paper, we propose a novel scheme for both interpretation and explanation in which, given a pretrained model, we automatically identify the internal features relevant to the set of classes considered by the model, without relying on additional annotations. We interpret the model through average visualizations of this reduced set of features. Then, at test time, we explain the network prediction by accompanying the predicted class label with supporting visualizations derived from the identified features. In addition, we propose a method to address the artifacts introduced by strided operations in deconvNet-based visualizations. Moreover, we introduce an8Flower, a dataset specifically designed for objective quantitative evaluation of methods for visual explanation. Experiments on the MNIST, ILSVRC12, Fashion144k and an8Flower datasets show that our method produces detailed explanations with good coverage of the relevant features of the classes of interest.
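The abstract outlines a two-stage pipeline: first identify the internal features of a pretrained network that are relevant to each class, then surface those features as supporting visualizations at prediction time. The sketch below illustrates one plausible way to realize the first stage; it approximates feature relevance with an L1-regularized linear probe over globally pooled convolutional activations, which is an assumption for illustration and not necessarily the selection procedure used in the paper. The layer choice (`model.features[-1]`) and helper names (`pooled_features`, `relevant_channels`) are likewise hypothetical.

```python
# Hypothetical sketch: select class-relevant channels of a pretrained CNN
# via a sparse (L1) linear probe on pooled activations. Assumes a multiclass
# problem (one weight row per class) and is not the paper's exact method.
import torch
import torchvision.models as models
from sklearn.linear_model import LogisticRegression

model = models.vgg16(pretrained=True).eval()
activations = {}

def hook(_module, _inputs, output):
    # Global-average-pool each conv feature map to one scalar per channel.
    activations["feat"] = output.mean(dim=(2, 3)).detach()

# Assumed layer choice: the last convolutional block of VGG-16.
model.features[-1].register_forward_hook(hook)

def pooled_features(x):
    """Run a forward pass and return pooled activations (N x channels)."""
    with torch.no_grad():
        model(x)
    return activations["feat"]

def relevant_channels(train_x, train_y, top_k=8):
    """Return, per class, the channels with the largest probe weights.

    train_x: image batch (N x 3 x 224 x 224), train_y: class labels (N,).
    """
    feats = pooled_features(train_x).numpy()
    probe = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    probe.fit(feats, train_y.numpy())
    return {c: probe.coef_[i].argsort()[-top_k:][::-1]
            for i, c in enumerate(probe.classes_)}
```

At test time, the channels returned by `relevant_channels` for the predicted class could be visualized (e.g., with a deconvNet or similar feature-visualization method, as the abstract suggests) to accompany the predicted label with supporting evidence.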

URL

http://arxiv.org/abs/1712.06302

PDF

http://arxiv.org/pdf/1712.06302

