
Vision as an Interlingua: Learning Multilingual Semantic Embeddings of Untranscribed Speech

2018-04-09
David Harwath, Galen Chuang, James Glass

Abstract

In this paper, we explore the learning of neural network embeddings for natural images and speech waveforms describing the content of those images. These embeddings are learned directly from the waveforms without the use of linguistic transcriptions or conventional speech recognition technology. While prior work has investigated this setting in the monolingual case using English speech data, this work represents the first effort to apply these techniques to languages beyond English. Using spoken captions collected in English and Hindi, we show that the same model architecture can be successfully applied to both languages. Further, we demonstrate that training a multilingual model simultaneously on both languages offers improved performance over the monolingual models. Finally, we show that these models are capable of performing semantic cross-lingual speech-to-speech retrieval.
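The abstract describes embedding images and spoken captions into a shared semantic space, so that a matched image/speech pair scores higher than mismatched ones and retrieval can be done by nearest-neighbor search. A common way to train such a space is a margin-based ranking loss over a batch of paired embeddings; the sketch below illustrates that idea only. All shapes, the `margin` value, and the toy encoders are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Unit-normalize embeddings so dot product equals cosine similarity.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def margin_rank_loss(img_emb, aud_emb, margin=1.0):
    """Hinge ranking loss over a batch of paired cross-modal embeddings.

    img_emb, aud_emb: (batch, dim) L2-normalized embeddings, where row i
    of each matrix is an image and its spoken caption (a matched pair).
    Mismatched rows act as impostors that must score at least `margin`
    below the matched pair.  (Illustrative formulation, not the paper's.)
    """
    sims = img_emb @ aud_emb.T           # (batch, batch) similarity matrix
    pos = np.diag(sims)                  # matched-pair similarities
    # Impostor captions for each image, and impostor images for each caption:
    cost_aud = np.maximum(0.0, margin + sims - pos[:, None])
    cost_img = np.maximum(0.0, margin + sims - pos[None, :])
    n = sims.shape[0]
    off_diag = ~np.eye(n, dtype=bool)    # exclude the matched pairs themselves
    return (cost_aud[off_diag].sum() + cost_img[off_diag].sum()) / n

# Toy batch: "audio" embeddings are near-copies of the image embeddings,
# standing in for the outputs of two trained encoder networks.
rng = np.random.default_rng(0)
img = l2_normalize(rng.normal(size=(4, 8)))
aud = l2_normalize(img + 0.01 * rng.normal(size=(4, 8)))

loss = margin_rank_loss(img, aud)
# Retrieval = pick the caption with the highest similarity for each image.
retrieved = np.argmax(img @ aud.T, axis=1)
```

In this toy setup each image retrieves its own caption, since the paired embeddings were constructed to be nearly identical; the cross-lingual retrieval result in the abstract follows the same principle, with an English caption retrieving a Hindi caption via their proximity in the shared space.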

URL

https://arxiv.org/abs/1804.03052

PDF

https://arxiv.org/pdf/1804.03052

