
FurcaNeXt: End-to-end monaural speech separation with dynamic gated dilated temporal convolutional networks

2019-02-12
Ziqiang Shi, Huibin Lin, Liu Liu, Rujie Liu, Jiqing Han

Abstract

Deep dilated temporal convolutional networks (TCN) have proven very effective in sequence modeling. In this paper we propose several improvements of TCN for end-to-end monaural speech separation: 1) a multi-scale dynamic weighted gated dilated convolutional pyramid network (FurcaPy), 2) a gated TCN with intra-parallel convolutional components (FurcaPa), 3) a weight-shared multi-scale gated TCN (FurcaSh), and 4) a dilated TCN with a gated difference-convolutional component (FurcaSu). All of these networks take the mixed utterance of two speakers and map it to two separated utterances, each containing only one speaker's voice. For the objective, we propose to train the network by directly optimizing the utterance-level signal-to-distortion ratio (SDR) in a permutation invariant training (PIT) style. Our experiments on the public WSJ0-2mix corpus yield an 18.1 dB SDR improvement, showing that the proposed networks can improve performance on the speaker separation task.
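The training objective described above combines an utterance-level SDR criterion with permutation invariant training: the loss is computed for every assignment of network outputs to reference speakers, and the best assignment is kept. Below is a minimal PyTorch sketch of such a loss for two speakers. The function names (`sdr`, `pit_sdr_loss`) are hypothetical, and the scale-invariant projection used here is an assumption; the exact SDR definition in the paper may differ.

```python
import itertools
import torch

def sdr(est, ref, eps=1e-8):
    """SDR in dB for one estimate/reference pair; est, ref: (batch, time)."""
    # Project the estimate onto the reference (scale-invariant variant, assumed here).
    ref_energy = torch.sum(ref ** 2, dim=-1, keepdim=True) + eps
    proj = torch.sum(est * ref, dim=-1, keepdim=True) * ref / ref_energy
    noise = est - proj
    ratio = torch.sum(proj ** 2, dim=-1) / (torch.sum(noise ** 2, dim=-1) + eps)
    return 10 * torch.log10(ratio + eps)

def pit_sdr_loss(est, ref):
    """Utterance-level PIT loss: negative SDR of the best speaker permutation.
    est, ref: (batch, num_speakers, time)."""
    num_spk = est.shape[1]
    best = None
    for perm in itertools.permutations(range(num_spk)):
        # Mean SDR over speakers for this assignment of outputs to targets.
        cur = torch.stack(
            [sdr(est[:, i], ref[:, p]) for i, p in enumerate(perm)], dim=1
        ).mean(dim=1)
        best = cur if best is None else torch.maximum(best, cur)
    return -best.mean()
```

As a usage sketch, the separation network's two output waveforms and the two clean references (each of shape `(batch, 2, time)`) are passed to `pit_sdr_loss`, and the resulting scalar is minimized by backpropagation, which is equivalent to maximizing the utterance-level SDR under the best output-to-speaker permutation.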

URL

http://arxiv.org/abs/1902.04891

PDF

http://arxiv.org/pdf/1902.04891

