
Aggregating explainability methods for neural networks stabilizes explanations

2019-03-01
Laura Rieger, Lars Kai Hansen

Abstract

Despite a growing literature on explaining neural networks, no consensus has been reached on how to explain a neural network decision or how to evaluate an explanation. In fact, most works rely on manually assessing the explanation to evaluate the quality of a method. This injects uncertainty into the explanation process along several dimensions: Which explanation method should be applied? Who should we ask to evaluate it, and which criteria should be used for the evaluation? Our contributions in this paper are twofold. First, we investigate schemes to combine explanation methods and reduce model uncertainty to obtain a single aggregated explanation. Our findings show that the aggregation is more robust, better aligned with human explanations, and able to attribute relevance to a broader set of features (completeness). Second, we propose a novel way of evaluating explanation methods that circumvents the need for manual evaluation and is not reliant on the alignment of neural networks and human decision processes.
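
The first contribution described above is the combination of several explanation methods into a single aggregated attribution map. The sketch below illustrates one plausible aggregation scheme (normalize each method's attribution map, then average); the normalization step, the function names, and the stand-in data are assumptions for illustration, not the paper's exact procedure.

```python
# Minimal sketch: aggregate attribution maps from several explanation methods
# into one, more stable map. Normalize-then-average is an assumed scheme; the
# abstract only states that explanations are combined, not how.
import numpy as np


def normalize_attribution(attr: np.ndarray) -> np.ndarray:
    """Rescale an attribution map so maps from different methods are comparable."""
    attr = np.abs(attr)  # assumption: only the magnitude of relevance matters
    total = attr.sum()
    return attr / total if total > 0 else attr


def aggregate_explanations(attribution_maps: list) -> np.ndarray:
    """Average the normalized attribution maps produced by several methods."""
    normalized = [normalize_attribution(a) for a in attribution_maps]
    return np.mean(normalized, axis=0)


# Usage: suppose three methods (e.g. saliency, integrated gradients, LRP)
# each produced a heatmap for the same input; random 28x28 stand-ins here.
rng = np.random.default_rng(0)
maps = [rng.standard_normal((28, 28)) for _ in range(3)]
aggregated = aggregate_explanations(maps)
print(aggregated.shape)  # (28, 28): one combined relevance map
```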

URL

http://arxiv.org/abs/1903.00519

PDF

http://arxiv.org/pdf/1903.00519
