
Neural source-filter-based waveform model for statistical parametric speech synthesis

2019-04-27
Xin Wang, Shinji Takaki, Junichi Yamagishi

Abstract

Neural waveform models such as WaveNet are used in many recent text-to-speech systems, but the original WaveNet is quite slow at waveform generation because of its autoregressive (AR) structure. Although faster non-AR models have recently been reported, they can be prohibitively complicated because of the distillation-based training method and the blend of other disparate training criteria they require. This study proposes a non-AR neural source-filter waveform model that can be trained directly using spectrum-based training criteria and stochastic gradient descent. Given the input acoustic features, the proposed model first uses a source module to generate a sine-based excitation signal and then uses a filter module to transform the excitation signal into the output speech waveform. Our experiments demonstrated that the proposed model generated waveforms at least 100 times faster than the AR WaveNet and that the quality of its synthetic speech was close to that of speech generated by the AR WaveNet. Ablation test results showed that both the sine-wave excitation signal and the spectrum-based training criteria were essential to the performance of the proposed model.
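To make the two ingredients the abstract highlights concrete, here is a minimal NumPy sketch of (a) a sine-based excitation signal built from a frame-level F0 contour and (b) a log-spectral-amplitude distance, one simple instance of a spectrum-based training criterion. The function names, parameter values (amplitude, noise level, FFT size, hop), and the single-sine simplification are illustrative assumptions, not the paper's code; the actual model uses harmonic sine components and a trainable neural filter module, which this sketch omits.

```python
import numpy as np

def sine_excitation(f0_frames, frame_shift, sr=16000, amp=0.1, noise_std=0.003, seed=0):
    # Upsample frame-level F0 to the sample level, then build the excitation:
    # a sine wave where voiced (f0 > 0) plus low-level Gaussian noise.
    # amp and noise_std are illustrative values, not the paper's settings.
    rng = np.random.default_rng(seed)
    f0 = np.repeat(np.asarray(f0_frames, dtype=float), frame_shift)
    phase = 2.0 * np.pi * np.cumsum(f0 / sr)   # instantaneous phase
    exc = np.where(f0 > 0, amp * np.sin(phase), 0.0)
    return exc + rng.normal(0.0, noise_std, size=exc.shape)

def log_spectral_distance(x, y, n_fft=512, hop=128):
    # Mean squared distance between log STFT amplitude spectra; a
    # simplified stand-in for the paper's spectrum-based criteria.
    def log_mag(sig):
        n_frames = 1 + (len(sig) - n_fft) // hop
        frames = np.stack([sig[i * hop:i * hop + n_fft] for i in range(n_frames)])
        return np.log(np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=-1)) + 1e-7)
    return float(np.mean((log_mag(x) - log_mag(y)) ** 2))

# Toy usage: 100 frames of a flat 220 Hz voiced contour at a 5 ms shift (80 samples).
f0 = np.full(100, 220.0)
e = sine_excitation(f0, frame_shift=80)
print(e.shape, log_spectral_distance(e, e))  # (8000,) 0.0
```

In training, a distance of this kind would be computed between the generated and natural waveforms and minimized by back-propagating through the filter module with stochastic gradient descent, which is what removes the need for distillation.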

URL

http://arxiv.org/abs/1810.11946

PDF

http://arxiv.org/pdf/1810.11946

