
Attention on Attention: Architectures for Visual Question Answering

2018-03-21
Jasdeep Singh, Vincent Ying, Alex Nutkiewicz

Abstract

Visual Question Answering (VQA) is an increasingly popular topic in deep learning research, requiring coordination of natural language processing and computer vision modules into a single architecture. We build upon the model which placed first in the VQA Challenge by developing thirteen new attention mechanisms and introducing a simplified classifier. We performed 300 GPU hours of extensive hyperparameter and architecture searches and were able to achieve an evaluation score of 64.78%, outperforming the existing state-of-the-art single model’s validation score of 63.15%.
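As a rough illustration of the kind of attention mechanism the abstract describes (question-guided attention over image region features), here is a minimal sketch. All module names and dimensions are assumptions for illustration, not the authors' actual code.

```python
# Hypothetical sketch of a question-guided attention layer over image region
# features, in the spirit of the VQA architectures the paper builds on.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QuestionGuidedAttention(nn.Module):
    def __init__(self, img_dim=2048, q_dim=1024, hidden_dim=512):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hidden_dim)
        self.q_proj = nn.Linear(q_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, img_feats, q_feat):
        # img_feats: (batch, num_regions, img_dim) -- e.g. region features from a CNN/detector
        # q_feat:    (batch, q_dim)                -- e.g. final RNN state of the question
        joint = torch.tanh(self.img_proj(img_feats) + self.q_proj(q_feat).unsqueeze(1))
        weights = F.softmax(self.score(joint), dim=1)   # (batch, num_regions, 1)
        attended = (weights * img_feats).sum(dim=1)     # (batch, img_dim)
        return attended, weights
```

The attended image vector would then typically be fused with the question representation and passed to a classifier over candidate answers.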

URL

https://arxiv.org/abs/1803.07724

PDF

https://arxiv.org/pdf/1803.07724
