
Learning Visually-Grounded Semantics from Contrastive Adversarial Samples

2018-06-27
Haoyue Shi, Jiayuan Mao, Tete Xiao, Yuning Jiang, Jian Sun

Abstract

We study the problem of grounding distributional representations of text in the visual domain, namely visual-semantic embeddings (VSE for short). Beginning with an insightful adversarial attack on VSE embeddings, we show the limitations of current frameworks and image-text datasets (e.g., MS-COCO) both quantitatively and qualitatively. The large gap between the number of possible compositions of real-world semantics and the size of parallel data largely prevents models from establishing the link between textual semantics and visual concepts. We alleviate this problem by augmenting the MS-COCO image captioning dataset with textual contrastive adversarial samples, synthesized using linguistic rules and the WordNet knowledge base; the construction procedure is both syntax- and semantics-aware. These samples force the model to ground the learned embeddings in concrete concepts within the image. This simple but powerful technique brings a noticeable improvement over the baselines on a diverse set of downstream tasks, in addition to defending against known types of adversarial attacks. We release the code at this https URL.
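At a high level, the augmentation rule takes a ground-truth caption and synthesizes a hard negative that stays grammatical but no longer matches the image, for example by swapping a noun for a semantically related but visually wrong word drawn from WordNet. The sketch below illustrates one such rule using NLTK's WordNet interface. It is an illustrative approximation, not the released implementation: the helper names (`sibling_nouns`, `make_contrastive_caption`) are invented here, and the paper's actual procedure is syntax- and semantics-aware rather than a plain string swap.

```python
# Illustrative sketch of WordNet-based contrastive caption generation.
# NOT the authors' released code: the paper's construction is syntax- and
# semantics-aware, while this sketch only swaps one noun for a co-hyponym.
# Requires: pip install nltk, then nltk.download("wordnet") once.

import random
from nltk.corpus import wordnet as wn

def sibling_nouns(word):
    """Return co-hyponyms of `word`: nouns sharing a direct hypernym."""
    siblings = set()
    for synset in wn.synsets(word, pos=wn.NOUN):
        for hypernym in synset.hypernyms():
            for sibling in hypernym.hyponyms():
                for lemma in sibling.lemma_names():
                    if "_" not in lemma and lemma.lower() != word.lower():
                        siblings.add(lemma)
    return sorted(siblings)

def make_contrastive_caption(caption, target_noun, rng=random):
    """Swap `target_noun` for a WordNet sibling, producing a fluent
    caption that is contradicted by the image (a hard negative)."""
    candidates = sibling_nouns(target_noun)
    if not candidates:
        return None
    return caption.replace(target_noun, rng.choice(candidates))

if __name__ == "__main__":
    caption = "a dog sitting on a couch next to a laptop"
    print(make_contrastive_caption(caption, "dog"))
    # e.g. "a fox sitting on a couch next to a laptop": plausible text,
    # but false for the image, so matching it should be penalized.
```

Training would then contrast the original caption against such negatives, which is what forces the embedding to attend to the concrete visual concept rather than to distributional similarity alone.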

URL

https://arxiv.org/abs/1806.10348

PDF

https://arxiv.org/pdf/1806.10348

