
Saliency Learning: Teaching the Model Where to Pay Attention

2019-02-22
Reza Ghaeini, Xiaoli Z. Fern, Hamed Shahbazi, Prasad Tadepalli

Abstract

Deep learning has emerged as a compelling solution to many NLP tasks with remarkable performance. However, due to their opacity, such models are hard to interpret and trust. Recent work on explaining deep models has introduced approaches that provide insight into the model's behavior and predictions, which are helpful for determining the reliability of the model's predictions. However, such methods do not by themselves fix or improve the model's reliability. In this paper, we teach our models to make the right prediction for the right reason by providing an explanation training signal and ensuring alignment of the model's explanation with the ground-truth explanation. Our experimental results on multiple tasks and datasets demonstrate the effectiveness of the proposed method, which produces more reliable predictions while delivering better results than traditionally trained models.
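The abstract describes adding an explanation training signal that aligns the model's saliency with ground-truth annotations of important tokens. The sketch below illustrates one way such an objective could look in PyTorch: a gradient-based saliency score per token, plus a hinge-style penalty on annotated tokens, added to the usual task loss. This is only an assumed, minimal formulation; the function name `saliency_alignment_loss`, the model interface (a classifier taking token embeddings), the binary `saliency_mask`, the hinge form of the penalty, and the weight `lam` are all illustrative choices, not the paper's exact method.

```python
import torch
import torch.nn.functional as F

def saliency_alignment_loss(model, embeddings, labels, saliency_mask, lam=0.1):
    """Hypothetical sketch of a saliency-learning objective.

    embeddings:    (batch, seq_len, dim) token embeddings fed to the model
    labels:        (batch,) gold class indices
    saliency_mask: (batch, seq_len) binary ground-truth explanation mask
    """
    embeddings = embeddings.detach().requires_grad_(True)
    logits = model(embeddings)                       # (batch, num_classes)
    task_loss = F.cross_entropy(logits, labels)

    # Gradient-based saliency: gradient of the gold-class score w.r.t. each
    # token embedding, summed over the embedding dimension -> one score/token.
    gold_scores = logits.gather(1, labels.unsqueeze(1)).sum()
    grads, = torch.autograd.grad(gold_scores, embeddings, create_graph=True)
    saliency = grads.sum(dim=-1)                     # (batch, seq_len)

    # Alignment penalty (illustrative): tokens annotated as important should
    # receive non-negative saliency; hinge on the negative part only.
    alignment = F.relu(-saliency * saliency_mask).mean()

    return task_loss + lam * alignment
```

In this reading, the explanation signal is differentiable (via `create_graph=True`), so the penalty shapes the model's gradients during training rather than merely inspecting them after the fact, which matches the abstract's goal of making the model predict "for the right reason."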

URL

http://arxiv.org/abs/1902.08649

PDF

http://arxiv.org/pdf/1902.08649

