
Taking a HINT: Leveraging Explanations to Make Vision and Language Models More Grounded

2019-02-11
Ramprasaath R. Selvaraju, Stefan Lee, Yilin Shen, Hongxia Jin, Dhruv Batra, Devi Parikh

Abstract

Many vision and language models suffer from poor visual grounding, often falling back on easy-to-learn language priors rather than associating language with visual concepts. In this work, we propose a generic framework, which we call Human Importance-aware Network Tuning (HINT), that effectively leverages human supervision to improve visual grounding. HINT constrains deep networks to be sensitive to the same input regions as humans. Crucially, our approach optimizes the alignment between human attention maps and gradient-based network importances, ensuring that models learn to not just look at but rely on the visual concepts that humans found relevant for a task when making predictions. We demonstrate our approach on Visual Question Answering and Image Captioning tasks, achieving state-of-the-art results on the VQA-CP dataset, which penalizes over-reliance on language priors.
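To make the core idea concrete, here is a minimal sketch (not the authors' implementation) of how one might align gradient-based region importances with human attention rankings. All names, shapes, and the stand-in linear model are illustrative assumptions; the actual paper operates on region proposals inside a full VQA model and uses its own ranking formulation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# --- Toy setup (all names and shapes are illustrative assumptions) ---
num_regions, feat_dim, num_answers = 8, 16, 10
region_feats = torch.randn(num_regions, feat_dim, requires_grad=True)
human_scores = torch.rand(num_regions)          # human attention per region
answer_head = nn.Linear(feat_dim, num_answers)  # stand-in for a VQA model
answer_idx = 3                                  # ground-truth answer index

# Gradient-based importance: gradient of the answer score w.r.t. each
# region's features, reduced to one scalar per region (Grad-CAM-style).
score = answer_head(region_feats.mean(dim=0))[answer_idx]
grads, = torch.autograd.grad(score, region_feats, create_graph=True)
net_importance = (grads * region_feats).sum(dim=-1)  # shape: (num_regions,)

# Pairwise ranking loss: for every region pair (i, j) that humans rank
# i above j, penalize the network if its importance ordering disagrees.
diff_human = human_scores.unsqueeze(1) - human_scores.unsqueeze(0)
diff_net = net_importance.unsqueeze(1) - net_importance.unsqueeze(0)
mask = (diff_human > 0).float()
hint_loss = (torch.clamp(-diff_net, min=0.0) * mask).sum() / mask.sum()

hint_loss.backward()  # create_graph=True lets this update the model itself
print(f"HINT-style ranking loss: {hint_loss.item():.4f}")
```

Because the importances are themselves gradients, the alignment loss is a second-order objective: minimizing it changes not just what the model predicts but which regions its predictions are sensitive to.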

URL

http://arxiv.org/abs/1902.03751

PDF

http://arxiv.org/pdf/1902.03751
