
Unmasking Clever Hans Predictors and Assessing What Machines Really Learn

2019-02-26
Sebastian Lapuschkin, Stephan Wäldchen, Alexander Binder, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller

Abstract

Current learning machines have successfully solved hard application problems, reaching high accuracy and displaying seemingly “intelligent” behavior. Here we apply recent techniques for explaining the decisions of state-of-the-art learning machines and analyze various tasks from computer vision and arcade games. This showcases a spectrum of problem-solving behaviors ranging from naive and short-sighted to well-informed and strategic. We observe that standard performance evaluation metrics can fail to distinguish between these diverse problem-solving behaviors. We therefore propose the semi-automated Spectral Relevance Analysis, which provides a practically effective way of characterizing and validating the behavior of nonlinear learning machines and helps to assess whether a learned model indeed delivers reliably for the problem it was conceived for. Finally, our work adds a voice of caution to the ongoing excitement about machine intelligence and pleads for evaluating and judging some of these recent successes in a more nuanced manner.
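
The Spectral Relevance Analysis (SpRAy) named in the abstract works by clustering per-sample explanation heatmaps (e.g., from Layer-wise Relevance Propagation) rather than raw inputs, so that recurring prediction strategies, including "Clever Hans" shortcuts, surface as clusters. Below is a minimal sketch of that idea, assuming the relevance maps have already been computed with an attribution method; the array shapes, downsampling resolution, cluster count, and the helper name `spray_clusters` are illustrative assumptions, not the authors' reference implementation.

```python
# Sketch of a SpRAy-style analysis: spectral clustering of relevance maps.
# Assumes `relevance_maps` is a NumPy array of shape (n_samples, H, W) holding
# one LRP heatmap per input sample, with H and W at least `side` pixels.
import numpy as np
from sklearn.cluster import SpectralClustering

def spray_clusters(relevance_maps, n_clusters=8, side=16, seed=0):
    """Group samples by the spatial structure of their relevance maps."""
    n, h, w = relevance_maps.shape
    # 1) Downsample each heatmap to a common coarse grid by block averaging,
    #    so clustering reflects coarse relevance structure, not pixel noise.
    hs, ws = h // side, w // side
    coarse = relevance_maps[:, :hs * side, :ws * side]
    coarse = coarse.reshape(n, side, hs, side, ws).mean(axis=(2, 4))
    # 2) L2-normalize each map so clusters capture *where* relevance falls
    #    rather than its absolute magnitude.
    flat = coarse.reshape(n, -1)
    flat = flat / (np.linalg.norm(flat, axis=1, keepdims=True) + 1e-12)
    # 3) Spectral clustering on a nearest-neighbor affinity graph over maps.
    model = SpectralClustering(n_clusters=n_clusters,
                               affinity="nearest_neighbors",
                               n_neighbors=10,
                               random_state=seed)
    return model.fit_predict(flat)
```

Inspecting the input images that fall into each returned cluster (for instance, all samples of one class whose relevance concentrates on a watermark or border artifact) is what makes the suspect strategies visible to a human analyst.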

URL

http://arxiv.org/abs/1902.10178

PDF

http://arxiv.org/pdf/1902.10178

