Solving Visual Madlibs with Multiple Cues

2016-08-11
Tatiana Tommasi, Arun Mallya, Bryan Plummer, Svetlana Lazebnik, Alexander C. Berg, Tamara L. Berg

Abstract

This paper focuses on answering fill-in-the-blank style multiple choice questions from the Visual Madlibs dataset. Previous approaches to Visual Question Answering (VQA) have mainly used generic image features from networks trained on the ImageNet dataset, despite the wide scope of questions. In contrast, our approach employs features derived from networks trained for specialized tasks of scene classification, person activity prediction, and person and object attribute prediction. We also present a method for selecting sub-regions of an image that are relevant for evaluating the appropriateness of a putative answer. Visual features are computed both from the whole image and from local regions, while sentences are mapped to a common space using a simple normalized canonical correlation analysis (CCA) model. Our results show a significant improvement over the previous state of the art, and indicate that answering different question types benefits from examining a variety of image cues and carefully choosing informative image sub-regions.
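
The abstract describes scoring candidate answers by mapping image features and sentence features into a common space with a normalized CCA model and comparing them there. Below is a minimal sketch of that idea, using scikit-learn's plain CCA as a stand-in for the paper's normalized variant; the feature dimensions, the random placeholder data, and the cosine-similarity ranking rule are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch: rank multiple-choice answers by projecting image and
# sentence features into a shared CCA space. Plain sklearn CCA is used
# here as a stand-in for the paper's normalized CCA; all data and
# dimensions below are placeholders, not the authors' setup.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)

# Placeholder paired training data: image features (e.g., activations
# from scene/activity/attribute networks) and matched sentence features.
img_train = rng.standard_normal((500, 128))
txt_train = rng.standard_normal((500, 64))

cca = CCA(n_components=32, max_iter=1000)
cca.fit(img_train, txt_train)

def cosine(a, b):
    # Cosine similarity between two 1-D vectors.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def score_choices(img_feat, choice_feats):
    """Score each candidate answer by its cosine similarity to the
    image after both are projected into the shared CCA space."""
    img_proj = cca.transform(img_feat[None, :])[0]
    # Project only the text side; the dummy X argument does not affect
    # the returned y-scores, which are a separate linear projection.
    _, txt_proj = cca.transform(
        np.zeros((len(choice_feats), img_train.shape[1])), choice_feats)
    return [cosine(img_proj, t) for t in txt_proj]

# Example: pick the best of four candidate answers for one image.
img_feat = rng.standard_normal(128)
choices = rng.standard_normal((4, 64))
scores = score_choices(img_feat, choices)
print("predicted choice:", int(np.argmax(scores)))
```

In the paper's setting, the image side would combine whole-image and selected sub-region features from the specialized networks, and the text side would encode each fill-in-the-blank answer candidate.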

URL

https://arxiv.org/abs/1608.03410

PDF

https://arxiv.org/pdf/1608.03410

