Abstract
This paper introduces Taco-VC, a novel architecture for voice conversion (VC) based on the Tacotron synthesizer, a sequence-to-sequence model with attention. Most current prosody-preserving VC systems suffer from target-similarity and quality issues in the converted speech. To address these problems, we first recover initial prosody-preserving speech using a Phonetic Posteriorgram (PPG) based Tacotron synthesizer. Then, we enhance the quality of the converted speech using a novel speech-enhancement network based on a combination of phoneme recognition and Tacotron networks. The final converted speech is generated by a Wavenet vocoder conditioned on Mel spectrograms. Given the advantages of a single-speaker Tacotron and Wavenet, we show how to adapt them to other speakers with limited training data. We evaluate our solution on the VCC 2018 SPOKE task. Using public mid-size datasets, our method outperforms the baseline and achieves competitive results.
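To make the described pipeline concrete, here is a minimal PyTorch sketch of the mel-to-mel stages the abstract outlines: PPG extraction, PPG-conditioned synthesis, and the recognition-plus-resynthesis enhancement step. All class names, dimensions, and the GRU stand-ins are illustrative assumptions, not the authors' actual networks, and the final Wavenet vocoder stage is omitted.

```python
import torch
import torch.nn as nn

class PhonemeRecognizer(nn.Module):
    # Hypothetical stand-in for the speaker-independent recognizer
    # that emits Phonetic Posteriorgrams (PPGs).
    def __init__(self, n_mels: int = 80, n_phones: int = 40):
        super().__init__()
        self.rnn = nn.GRU(n_mels, n_phones, batch_first=True)

    def forward(self, mels: torch.Tensor) -> torch.Tensor:
        logits, _ = self.rnn(mels)          # (B, T, n_phones)
        return logits.softmax(dim=-1)       # per-frame phoneme posteriors

class TacotronStub(nn.Module):
    # Hypothetical stand-in for the PPG-conditioned Tacotron synthesizer;
    # the real model is a full sequence-to-sequence network with attention.
    def __init__(self, n_phones: int = 40, n_mels: int = 80):
        super().__init__()
        self.rnn = nn.GRU(n_phones, n_mels, batch_first=True)

    def forward(self, ppgs: torch.Tensor) -> torch.Tensor:
        mels, _ = self.rnn(ppgs)            # predicted target-speaker mels
        return mels

def taco_vc(source_mels: torch.Tensor,
            recognizer: PhonemeRecognizer,
            synthesizer: TacotronStub,
            enhancer: TacotronStub) -> torch.Tensor:
    """Mel-to-mel conversion: PPGs -> initial synthesis -> enhancement.

    The enhancement step here re-recognizes the coarse output and
    resynthesizes it, one plausible reading of "a combination of phoneme
    recognition and Tacotron networks". A Wavenet vocoder would then turn
    the refined Mel spectrogram into a waveform.
    """
    ppgs = recognizer(source_mels)          # prosody-preserving content
    coarse = synthesizer(ppgs)              # initial converted speech
    refined = enhancer(recognizer(coarse))  # quality enhancement
    return refined

if __name__ == "__main__":
    mels = torch.randn(1, 120, 80)          # one utterance, 120 frames
    out = taco_vc(mels, PhonemeRecognizer(), TacotronStub(), TacotronStub())
    print(out.shape)                        # torch.Size([1, 120, 80])
```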
URL
http://arxiv.org/abs/1904.03522