papers AI Learner

Incorporating External Knowledge to Answer Open-Domain Visual Questions with Dynamic Memory Networks

2017-12-03
Guohao Li, Hang Su, Wenwu Zhu

Abstract

Visual Question Answering (VQA) has attracted much attention since it offers insight into the relationship between multi-modal analysis of images and natural language. Most current algorithms are incapable of answering open-domain questions that require reasoning beyond the image contents. To address this issue, we propose a novel framework that endows the model with the capability to answer more complex questions by leveraging massive external knowledge with dynamic memory networks. Specifically, the questions along with the corresponding images trigger a process to retrieve the relevant information from external knowledge bases, which is embedded into a continuous vector space that preserves the entity-relation structures. Afterwards, we employ dynamic memory networks to attend to the large body of facts in the knowledge graph and images, and then perform reasoning over these facts to generate the corresponding answers. Extensive experiments demonstrate that our model not only achieves state-of-the-art performance on the visual question answering task, but can also answer open-domain questions effectively by leveraging the external knowledge.
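The abstract combines two ideas: embedding retrieved knowledge-base facts into a continuous space that preserves entity-relation structure, and a dynamic memory network that attends over those fact vectors conditioned on the question. Below is a minimal sketch (not the authors' code) of both pieces in PyTorch, using a TransE-style scorer for the knowledge embeddings and a single DMN-style episode; all layer sizes, module names, and the gate features are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TransEScorer(nn.Module):
    """Embeds entities and relations so that h + r ≈ t holds for true facts (TransE)."""
    def __init__(self, n_entities: int, n_relations: int, dim: int = 128):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)

    def score(self, head, rel, tail):
        # Smaller distance means a more plausible (head, relation, tail) fact,
        # which can be used to rank candidate facts during retrieval.
        return torch.norm(self.ent(head) + self.rel(rel) - self.ent(tail), dim=-1)


class EpisodicMemory(nn.Module):
    """One DMN-style episode: softly attend over fact vectors, then update memory with a GRU."""
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(4 * dim, dim), nn.Tanh(), nn.Linear(dim, 1))
        self.update = nn.GRUCell(dim, dim)

    def forward(self, facts, question, memory):
        # facts: (batch, n_facts, dim); question, memory: (batch, dim)
        q = question.unsqueeze(1).expand_as(facts)
        m = memory.unsqueeze(1).expand_as(facts)
        # Interaction features of the kind typically fed to a DMN attention gate.
        z = torch.cat([facts * q, facts * m, (facts - q).abs(), (facts - m).abs()], dim=-1)
        attn = F.softmax(self.gate(z).squeeze(-1), dim=-1)        # (batch, n_facts)
        context = torch.bmm(attn.unsqueeze(1), facts).squeeze(1)  # (batch, dim)
        return self.update(context, memory)                       # new memory state


# Example: two episodes over 10 retrieved fact embeddings for a batch of 4 questions.
dim = 128
episode = EpisodicMemory(dim)
facts = torch.randn(4, 10, dim)      # stand-in for embedded facts (and image regions)
question = torch.randn(4, dim)       # stand-in for the encoded question
memory = question.clone()            # memory is commonly initialized from the question
for _ in range(2):
    memory = episode(facts, question, memory)
```

In a full model, the final memory state would be fed to an answer decoder; here the multiple-episode loop is what lets the network gather different facts on each pass.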

URL

https://arxiv.org/abs/1712.00733

PDF

https://arxiv.org/pdf/1712.00733

