Speech-Based Visual Question Answering

2017-09-16
Ted Zhang, Dengxin Dai, Tinne Tuytelaars, Marie-Francine Moens, Luc Van Gool

Abstract

This paper introduces speech-based visual question answering (VQA), the task of generating an answer given an image and a spoken question. Two methods are studied: an end-to-end deep neural network that directly takes audio waveforms as input, and a pipelined approach that performs automatic speech recognition (ASR) on the question, followed by text-based visual question answering. Furthermore, we investigate the robustness of both methods by injecting various levels of noise into the spoken question and find that both methods tolerate similar levels of noise.
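
The abstract does not spell out the noise model, so as a minimal sketch of the robustness experiment, the snippet below shows one common way to corrupt a speech waveform with additive white Gaussian noise at a target signal-to-noise ratio (SNR). The Gaussian noise model, the SNR levels, and the function name are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def add_noise(speech: np.ndarray, snr_db: float, seed: int = 0) -> np.ndarray:
    """Mix white Gaussian noise into `speech` at the target SNR in dB.

    Assumption: additive Gaussian noise scaled so that
    10 * log10(speech_power / noise_power) == snr_db.
    """
    rng = np.random.default_rng(seed)
    speech_power = np.mean(speech ** 2)
    noise_power = speech_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=speech.shape)
    return speech + noise

# Example: corrupt a dummy 1-second, 16 kHz "spoken question"
# at a few illustrative SNR levels (20, 10, and 0 dB).
question_audio = np.random.default_rng(1).normal(size=16000).astype(np.float32)
noisy_versions = {snr: add_noise(question_audio, snr) for snr in (20, 10, 0)}
```

The noisy waveforms would then be fed either directly to the end-to-end model or through the ASR front end of the pipelined approach, allowing the two methods to be compared at each noise level.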

URL

https://arxiv.org/abs/1705.00464

PDF

https://arxiv.org/pdf/1705.00464
