
Visually Grounded Word Embeddings and Richer Visual Features for Improving Multimodal Neural Machine Translation

2017-12-16
Jean-Benoit Delbrouck, Stéphane Dupont, Omar Seddati

Abstract

In Multimodal Neural Machine Translation (MNMT), a neural model generates a translated sentence that describes an image, given the image itself and one source description in English. This is known as the multimodal image caption translation task. The images are processed with a Convolutional Neural Network (CNN) to extract visual features exploitable by the translation model. So far, the CNNs used have been pre-trained on object detection and localization tasks. We hypothesize that richer architectures, such as dense captioning models, may be more suitable for MNMT and could lead to improved translations. We extend this intuition to the word embeddings, where we compute both a linguistic and a visual representation for our corpus vocabulary. We combine and compare different configurations.
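To make the grounded-embedding idea concrete, here is a minimal sketch, not the authors' implementation, of one way to pair each vocabulary word with both a linguistic and a visual vector: a trainable embedding concatenated with frozen, pre-computed visual features. The class name, dimensions, and the use of random tensors as stand-ins for real CNN features are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class GroundedEmbedding(nn.Module):
    """Concatenate a trainable linguistic embedding with fixed visual features."""

    def __init__(self, vocab_size: int, ling_dim: int, visual_feats: torch.Tensor):
        super().__init__()
        # Standard trainable word embeddings (linguistic representation).
        self.ling = nn.Embedding(vocab_size, ling_dim)
        # Pre-computed per-word visual features, e.g. pooled CNN activations
        # for images associated with each word; kept frozen here (assumption).
        self.visual = nn.Embedding.from_pretrained(visual_feats, freeze=True)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # Each token's final embedding is [linguistic ; visual].
        return torch.cat([self.ling(token_ids), self.visual(token_ids)], dim=-1)

# Usage: a 1000-word vocabulary, 256-d linguistic vectors, and 2048-d visual
# features (the size of a typical ResNet pooling layer output).
vocab_size, ling_dim, vis_dim = 1000, 256, 2048
visual_feats = torch.randn(vocab_size, vis_dim)  # stand-in for real features
emb = GroundedEmbedding(vocab_size, ling_dim, visual_feats)
tokens = torch.tensor([[1, 5, 42]])
print(emb(tokens).shape)  # torch.Size([1, 3, 2304])
```

Concatenation is only one possible fusion; the grounded embeddings could equally be summed or gated, and the paper compares several such configurations.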

URL

https://arxiv.org/abs/1707.01009

PDF

https://arxiv.org/pdf/1707.01009

