
Explaining AlphaGo: Interpreting Contextual Effects in Neural Networks

2019-01-08
Zenan Ling, Haotian Ma, Yu Yang, Robert C. Qiu, Song-Chun Zhu, Quanshi Zhang

Abstract

In this paper, we propose to disentangle and interpret contextual effects that are encoded in a pre-trained deep neural network. We use our method to explain the gaming strategy of the AlphaGo Zero model. Unlike previous studies that visualized image appearances corresponding to the network output or a neural activation only from a global perspective, our research aims to clarify how a certain input unit (dimension) collaborates with other units (dimensions) to constitute inference patterns of the neural network and thus contribute to the network output. The analysis of local contextual effects w.r.t. certain input units is of special value in real applications. Explaining the logic of the AlphaGo Zero model is a typical application. In experiments, our method successfully disentangled the rationale of each move during the Go game.
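The abstract does not spell out how a unit's contextual effect is measured. As a rough intuition only, one common way to quantify how an input unit interacts with the other units is a Shapley-style perturbation estimate: average the unit's marginal contribution to the network output over random contexts formed by the remaining units. The toy network `net` and the sampling scheme below are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pre-trained network: a fixed nonlinear scoring function.
W = rng.normal(size=(8,))

def net(x):
    return float(np.tanh(x @ W))

def contextual_effect(x, unit, n_samples=500):
    """Shapley-style estimate of how input `unit` contributes in the
    context of the other input dimensions (a generic sketch, not the
    paper's method). For random subsets S of the other units, compare
    the output with and without `unit`, keeping only S active."""
    d = x.size
    others = np.array([i for i in range(d) if i != unit])
    total = 0.0
    for _ in range(n_samples):
        mask = np.zeros(d)
        keep = rng.random(others.size) < 0.5   # random context S
        mask[others[keep]] = 1.0
        with_unit = mask.copy()
        with_unit[unit] = 1.0
        total += net(x * with_unit) - net(x * mask)
    return total / n_samples

x = rng.normal(size=8)
effects = [contextual_effect(x, u) for u in range(8)]
print(effects)
```

A large positive or negative estimate for a unit suggests it strongly modulates the output in combination with the others, which is the kind of local, unit-centric interaction the paper aims to disentangle for moves in Go.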

URL

http://arxiv.org/abs/1901.02184

PDF

http://arxiv.org/pdf/1901.02184
