
LS-Tree: Model Interpretation When the Data Are Linguistic

2019-02-11
Jianbo Chen, Michael I. Jordan

Abstract

We study the problem of interpreting trained classification models in the setting of linguistic data sets. Leveraging a parse tree, we propose to assign least-squares-based importance scores to each word of an instance by exploiting its syntactic constituency structure. We establish an axiomatic characterization of these importance scores by relating them to the Banzhaf value in coalitional game theory. Based on these importance scores, we develop a principled method for detecting and quantifying interactions between words in a sentence. We demonstrate that the proposed method can aid in interpretability and diagnostics for several widely-used language models.
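The Banzhaf value referenced above is a standard concept from coalitional game theory: the value of a player is its marginal contribution to a coalition, averaged over all coalitions of the remaining players. The sketch below is a generic brute-force Banzhaf computation over a toy characteristic function `v` (it is not the paper's least-squares tree method; the function names and the example game are illustrative assumptions). Treating each word as a player and `v(S)` as the model's score on the subset of words `S` conveys the intuition behind word-level importance scores.

```python
from itertools import combinations

def banzhaf_values(players, v):
    """Brute-force Banzhaf values for a coalitional game.

    players: list of hashable player identifiers (e.g. words).
    v: characteristic function mapping a frozenset of players to a real score.

    The Banzhaf value of player i is the average of the marginal
    contribution v(S | {i}) - v(S) over all 2^(n-1) coalitions S
    drawn from the other players.
    """
    n = len(players)
    values = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        # Enumerate every coalition S of the remaining players.
        for r in range(len(others) + 1):
            for coal in combinations(others, r):
                s = frozenset(coal)
                total += v(s | {i}) - v(s)
        values[i] = total / 2 ** (n - 1)
    return values

# Toy additive "model": the score of a word subset is the sum of
# per-word weights (hypothetical numbers, for illustration only).
weights = {"not": -2.0, "good": 3.0, "movie": 0.5}
scores = banzhaf_values(list(weights), lambda s: sum(weights[p] for p in s))
# For an additive game, each word's Banzhaf value equals its own weight.
```

The exponential enumeration is only practical for short sentences; the paper's contribution is precisely to exploit the parse tree so that such exhaustive subset evaluation is not required.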

URL

http://arxiv.org/abs/1902.04187

PDF

http://arxiv.org/pdf/1902.04187
