
Deep Text-to-Speech System with Seq2Seq Model

2019-03-11
Gary Wang

Abstract

Recent trends in neural-network-based text-to-speech/speech synthesis pipelines have employed recurrent Seq2seq architectures that can synthesize realistic-sounding speech directly from text characters. These systems, however, have complex architectures and take a substantial amount of time to train. We introduce several modifications to these Seq2seq architectures that allow for faster training while also reducing the complexity of the model architecture. We show that our proposed model can achieve attention alignment much faster than previous architectures and that good audio quality can be achieved with a model that is much smaller in size. Sample audio available at https://soundcloud.com/gary-wang-23/sets/tts-samples-for-cmpt-419.
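The attention alignment mentioned in the abstract refers to the mapping the decoder learns between output audio frames and input characters. A minimal sketch of one such alignment step, using simple dot-product attention with toy numpy values (not the paper's exact architecture or parameters):

```python
import numpy as np

# Illustrative sketch, not the paper's model: one step of dot-product
# attention, the mechanism that aligns decoder frames with encoder
# character representations in seq2seq TTS systems.

rng = np.random.default_rng(0)

T_enc, d = 6, 8                         # toy sizes: 6 characters, hidden dim 8
encoder_outputs = rng.standard_normal((T_enc, d))
decoder_state = rng.standard_normal(d)

# Alignment scores: similarity between the current decoder state and
# each encoder timestep.
scores = encoder_outputs @ decoder_state          # shape (T_enc,)

# Softmax turns scores into attention weights that sum to 1; the peak
# of this distribution is what an alignment plot visualizes.
weights = np.exp(scores - scores.max())
weights /= weights.sum()

# Context vector: attention-weighted sum of encoder outputs, which the
# decoder consumes to predict the next spectrogram frame.
context = weights @ encoder_outputs               # shape (d,)

print(weights.round(3))
```

Over a full utterance, stacking these weight vectors for every decoder step produces the familiar diagonal alignment plot; the paper's claim is that its modified architecture reaches that diagonal in fewer training steps.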

URL

http://arxiv.org/abs/1903.07398

PDF

http://arxiv.org/pdf/1903.07398
