
Many-to-Many Voice Conversion with Out-of-Dataset Speaker Support

2019-04-30
Gokce Keskin, Tyler Lee, Cory Stephenson, Oguz H. Elibol

Abstract

We present a Cycle-GAN-based many-to-many voice conversion method that can convert between speakers that are not in the training set. This property is enabled through speaker embeddings generated by a neural network that is jointly trained with the Cycle-GAN. In contrast to prior work in this domain, our method enables conversion between an out-of-dataset speaker and a target speaker in either direction and does not require re-training. Out-of-dataset conversion quality is evaluated using an independently trained speaker identification model and shows good style conversion for previously unheard speakers. Subjective tests with human listeners show that style conversion quality for in-dataset speakers is comparable to that of the state-of-the-art baseline model.
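The abstract describes a generator conditioned on speaker embeddings produced by a jointly trained encoder, which is what allows conversion toward speakers never seen during training. The sketch below (not the authors' code) illustrates that idea in minimal PyTorch: the encoder maps reference audio to an embedding, and the generator converts source features toward that embedding's style, with a cycle-consistency term as in Cycle-GAN. Feature dimensions, network sizes, and the concatenation-based conditioning are illustrative assumptions.

```python
# Minimal sketch of embedding-conditioned voice conversion, assuming
# mel-spectrogram inputs and simple GRU/MLP networks (all hyperparameters
# are hypothetical; the paper's actual architecture may differ).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpeakerEncoder(nn.Module):
    """Maps a mel-spectrogram to a fixed-size speaker embedding."""

    def __init__(self, n_mels=80, emb_dim=128):
        super().__init__()
        self.rnn = nn.GRU(n_mels, emb_dim, batch_first=True)

    def forward(self, mels):                 # mels: (batch, frames, n_mels)
        _, h = self.rnn(mels)                 # h: (1, batch, emb_dim)
        return F.normalize(h.squeeze(0), dim=-1)


class Generator(nn.Module):
    """Converts source features toward the style given by a speaker embedding."""

    def __init__(self, n_mels=80, emb_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_mels + emb_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_mels),
        )

    def forward(self, mels, spk_emb):         # mels: (B, T, n_mels), spk_emb: (B, emb_dim)
        cond = spk_emb.unsqueeze(1).expand(-1, mels.size(1), -1)
        return self.net(torch.cat([mels, cond], dim=-1))


if __name__ == "__main__":
    enc, gen = SpeakerEncoder(), Generator()
    src = torch.randn(4, 100, 80)             # source-speaker mel frames
    ref_tgt = torch.randn(4, 100, 80)         # reference audio from the target speaker
    tgt_emb = enc(ref_tgt)                    # embedding also works for unseen speakers
    converted = gen(src, tgt_emb)             # source content, target style
    cycled = gen(converted, enc(src))         # convert back for the cycle-consistency loss
    cycle_loss = F.l1_loss(cycled, src)
    print(converted.shape, cycle_loss.item())
```

Because the conditioning signal is an embedding computed from reference audio rather than a fixed speaker index, an out-of-dataset speaker only needs a short reference clip, no re-training, to serve as either the source or the target of a conversion.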

URL

https://arxiv.org/abs/1905.02525

PDF

https://arxiv.org/pdf/1905.02525
