
Transfer Learning via Unsupervised Task Discovery for Visual Question Answering

2019-04-07
Hyeonwoo Noh, Taehoon Kim, Jonghwan Mun, Bohyung Han

Abstract

We study how to leverage off-the-shelf visual and linguistic data to cope with out-of-vocabulary answers in the visual question answering task. Existing large-scale visual datasets with annotations such as image class labels, bounding boxes, and region descriptions are good sources for learning rich and diverse visual concepts. However, it is not straightforward to capture these visual concepts and transfer them to visual question answering models, due to the missing link between question-dependent answering models and visual data without questions. We tackle this problem in two steps: 1) learning a task conditional visual classifier, capable of solving diverse question-specific visual recognition tasks, based on unsupervised task discovery, and 2) transferring the task conditional visual classifier to visual question answering models. Specifically, we employ linguistic knowledge sources such as a structured lexical database (e.g., WordNet) and visual descriptions for unsupervised task discovery, and transfer the learned task conditional visual classifier as an answering unit in a visual question answering model. We show empirically that the proposed algorithm successfully generalizes to out-of-vocabulary answers using knowledge transferred from the visual dataset.
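
The central component described above is the task conditional visual classifier: a classifier that scores answer candidates for a visual input given a task specification (e.g., a WordNet-derived category or attribute). As a rough illustration only, below is a minimal PyTorch-style sketch of such a conditional classifier; the class name, dimensions, and the multiplicative fusion are assumptions made for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class TaskConditionalClassifier(nn.Module):
    """Scores answer candidates for visual features, conditioned on a task embedding.

    Hypothetical sketch: names, dimensions, and fusion choice are illustrative assumptions.
    """
    def __init__(self, visual_dim=2048, task_dim=300, hidden_dim=512, num_answers=1000):
        super().__init__()
        self.visual_proj = nn.Linear(visual_dim, hidden_dim)
        self.task_proj = nn.Linear(task_dim, hidden_dim)
        self.classifier = nn.Linear(hidden_dim, num_answers)

    def forward(self, visual_feat, task_emb):
        # Condition the visual representation on the task via elementwise
        # product (one common conditioning choice), then score the answers.
        fused = torch.relu(self.visual_proj(visual_feat)) * torch.relu(self.task_proj(task_emb))
        return self.classifier(fused)

# Usage: score a batch of 4 visual features under a given task embedding.
model = TaskConditionalClassifier()
logits = model(torch.randn(4, 2048), torch.randn(4, 300))
print(logits.shape)  # torch.Size([4, 1000])
```

Because the classifier takes the task as an input rather than fixing one recognition task at training time, the same module can be reused across the diverse question-specific tasks discovered without supervision and then transferred into the answering model.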

URL

http://arxiv.org/abs/1810.02358

PDF

http://arxiv.org/pdf/1810.02358

