
Semantic query-by-example speech search using visual grounding

2019-04-15
Herman Kamper, Aristotelis Anastassiou, Karen Livescu

Abstract

A number of recent studies have started to investigate how speech systems can be trained on untranscribed speech by leveraging accompanying images at training time. Examples of tasks include keyword prediction and within- and across-mode retrieval. Here we consider how such models can be used for query-by-example (QbE) search, the task of retrieving utterances relevant to a given spoken query. We are particularly interested in semantic QbE, where the task is not only to retrieve utterances containing exact instances of the query, but also utterances whose meaning is relevant to the query. We follow a segmental QbE approach where variable-duration speech segments (queries, search utterances) are mapped to fixed-dimensional embedding vectors. We show that a QbE system using an embedding function trained on visually grounded speech data outperforms a purely acoustic QbE system in terms of both exact and semantic retrieval performance.
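The segmental QbE approach the abstract describes (mapping variable-duration segments to fixed-dimensional embeddings, then ranking by similarity) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the embedding function is replaced by placeholder vectors, and cosine similarity is assumed as the ranking metric.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_utterances(query_emb, utterance_embs):
    """Rank search utterances by similarity to the query embedding.

    In the paper, both the query and the search segments would first be
    mapped to fixed-dimensional vectors by a learned (e.g. visually
    grounded) embedding network; here the embeddings are given directly.
    """
    scores = [cosine_similarity(query_emb, e) for e in utterance_embs]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)

# Toy example with random stand-in embeddings (hypothetical data).
rng = np.random.default_rng(0)
query = rng.normal(size=64)
utterances = [rng.normal(size=64) for _ in range(5)]
utterances[2] = query + 0.01 * rng.normal(size=64)  # near-duplicate of the query
ranking = rank_utterances(query, utterances)
print(ranking[0])  # the near-duplicate ranks first
```

With a purely acoustic embedding function, the top-ranked utterances would be exact matches; with a visually grounded one, semantically related utterances can also rank highly, which is the contrast the paper evaluates.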

URL

http://arxiv.org/abs/1904.07078

PDF

http://arxiv.org/pdf/1904.07078
