
Reconstructing faces from voices

2019-05-25
Yandong Wen, Rita Singh, Bhiksha Raj

Abstract

Voice profiling aims to infer various human parameters, such as gender and age, from speech. In this paper, we address the challenge posed by a subtask of voice profiling: reconstructing a person's face from their voice. The task is designed to answer the question: given an audio clip spoken by an unseen person, can we picture a face that shares as many elements, or associations, as possible with the speaker in terms of identity? To address this problem, we propose a simple but effective computational framework based on generative adversarial networks (GANs). The network learns to generate faces from voices by matching the identities of generated faces to those of the speakers on a training set. We evaluate the performance of the network by leveraging a closely related task, cross-modal matching. The results show that our model is able to generate faces that match several biometric characteristics of the speaker, and it achieves matching accuracies well above chance.
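The abstract evaluates generated faces via cross-modal matching: given a voice, decide which of two candidate faces belongs to the speaker, so chance accuracy is 50%. The paper does not specify the embedding model or distance metric, so the sketch below is only an illustration of this evaluation protocol, assuming precomputed voice and face embeddings and cosine similarity as the matching score (both are assumptions, not details from the paper).

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def matching_accuracy(trials):
    """1:2 cross-modal matching accuracy.

    Each trial is (voice_emb, true_face_emb, imposter_face_emb).
    A trial counts as correct when the probe voice is closer to the
    true speaker's face than to the imposter's face.
    """
    correct = sum(
        1 for v, f_true, f_imp in trials
        if cosine(v, f_true) > cosine(v, f_imp)
    )
    return correct / len(trials)

# Toy usage with 2-D embeddings (hypothetical data, for illustration only):
trials = [
    ([1.0, 0.1], [0.9, 0.0], [0.0, 1.0]),  # voice aligns with true face
    ([0.1, 1.0], [0.0, 0.9], [1.0, 0.0]),  # voice aligns with true face
]
print(matching_accuracy(trials))  # perfect separation on this toy set
```

An accuracy significantly above 0.5 on held-out speakers is what the abstract refers to as "much better than chance."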

URL

http://arxiv.org/abs/1905.10604

PDF

http://arxiv.org/pdf/1905.10604

