
Semantic speech retrieval with a visually grounded model of untranscribed speech

2018-10-31
Herman Kamper, Gregory Shakhnarovich, Karen Livescu

Abstract

There is growing interest in models that can learn from unlabelled speech paired with visual context. This setting is relevant for low-resource speech processing, robotics, and human language acquisition research. Here we study how a visually grounded speech model, trained on images of scenes paired with spoken captions, captures aspects of semantics. We use an external image tagger to generate soft text labels from images, which serve as targets for a neural model that maps untranscribed speech to (semantic) keyword labels. We introduce a newly collected data set of human semantic relevance judgements and an associated task, semantic speech retrieval, where the goal is to search for spoken utterances that are semantically relevant to a given text query. Without seeing any text, the model trained on parallel speech and images achieves a precision of almost 60% on its top ten semantic retrievals. Compared to a supervised model trained on transcriptions, our model matches human judgements better by some measures, especially in retrieving non-verbatim semantic matches. We perform an extensive analysis of the model and its resulting representations.
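
The training setup the abstract describes lends itself to a compact sketch. Below is a minimal, hypothetical PyTorch rendering of that idea, not the authors' implementation: a speech network pools over time and predicts per-keyword probabilities, trained with a soft-label cross-entropy against the image tagger's outputs on the paired images; retrieval then ranks utterances by the predicted probability of the query keyword. All names, layer sizes, and the vocabulary size here are illustrative assumptions.

```python
import torch
import torch.nn as nn

VOCAB_SIZE = 1000  # keyword vocabulary size (illustrative assumption)

class SpeechEncoder(nn.Module):
    """Maps a speech feature sequence to per-keyword probabilities."""
    def __init__(self, n_mels=40, hidden=512, vocab=VOCAB_SIZE):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_mels, hidden, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=9, padding=4),
            nn.ReLU(),
        )
        self.out = nn.Linear(hidden, vocab)

    def forward(self, feats):               # feats: (batch, n_mels, frames)
        h = self.conv(feats)                # (batch, hidden, frames)
        h = h.max(dim=2).values             # pool over time
        return torch.sigmoid(self.out(h))   # (batch, vocab) probabilities

model = SpeechEncoder()
loss_fn = nn.BCELoss()  # cross-entropy against the tagger's soft labels

# One hypothetical training step: `speech` stands in for a batch of
# log-mel features of the utterances; `soft_tags` for the external image
# tagger's keyword probabilities on the paired images.
speech = torch.randn(8, 40, 200)
soft_tags = torch.rand(8, VOCAB_SIZE)
loss = loss_fn(model(speech), soft_tags)
loss.backward()

# Semantic speech retrieval: rank utterances by the model's predicted
# probability for the text query's keyword index (chosen arbitrarily here).
query_id = 42
with torch.no_grad():
    scores = model(speech)[:, query_id]
top = scores.topk(min(10, scores.numel())).indices  # top-ranked utterances
```

In a setup like this, pooling over time lets the network respond to a keyword (or a semantically related word) wherever it occurs in the utterance, which is what allows text-free retrieval against a written query.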

URL

https://arxiv.org/abs/1710.01949

PDF

https://arxiv.org/pdf/1710.01949

