papers AI Learner

Time Domain Audio Visual Speech Separation

2019-04-07
Jian Wu, Yong Xu, Shi-Xiong Zhang, Lian-Wu Chen, Meng Yu, Lei Xie, Dong Yu

Abstract

Audio-visual multi-modal modeling has been demonstrated to be effective in many speech-related tasks, such as speech recognition and speech enhancement. This paper introduces a new time-domain audio-visual architecture for target speaker extraction from monaural mixtures. The architecture generalizes the previous TasNet (time-domain speech separation network) to enable multi-modal learning and, at the same time, extends classical audio-visual speech separation from the frequency domain to the time domain. The main components of the proposed architecture include an audio encoder, a video encoder that extracts lip embeddings from video streams, a multi-modal separation network, and an audio decoder. Experiments on simulated mixtures based on the recently released LRS2 dataset show that our method brings 3 dB+ and 4 dB+ Si-SNR improvements in the 2- and 3-speaker cases respectively, compared to audio-only TasNet and frequency-domain audio-visual networks.
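
The abstract outlines a four-part pipeline: an audio encoder over the raw waveform, a video encoder producing lip embeddings, a multi-modal separation network, and an audio decoder. Below is a minimal sketch of how such a time-domain audio-visual pipeline could be wired together in a Conv-TasNet-like style. All layer sizes, the fusion-by-concatenation choice, and module names (`AVTasNetSketch`, `lip_dim`, etc.) are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch of a time-domain audio-visual target speaker extractor,
# assuming a Conv-TasNet-style encoder/mask/decoder backbone.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AVTasNetSketch(nn.Module):
    def __init__(self, enc_dim=256, kernel=20, stride=10, lip_dim=256, hidden=256):
        super().__init__()
        # Audio encoder: 1-D convolution over the raw waveform (time domain).
        self.audio_enc = nn.Conv1d(1, enc_dim, kernel, stride=stride)
        # Video encoder stand-in: projects precomputed lip embeddings per frame.
        self.video_enc = nn.Linear(lip_dim, hidden)
        # Multi-modal separation network: 1-D conv blocks over the concatenation
        # of encoded audio and upsampled visual features, producing a mask.
        self.separator = nn.Sequential(
            nn.Conv1d(enc_dim + hidden, hidden, 1),
            nn.PReLU(),
            nn.Conv1d(hidden, hidden, 3, padding=1),
            nn.PReLU(),
            nn.Conv1d(hidden, enc_dim, 1),
            nn.Sigmoid(),  # mask over the encoded mixture
        )
        # Audio decoder: transposed convolution back to the waveform.
        self.audio_dec = nn.ConvTranspose1d(enc_dim, 1, kernel, stride=stride)

    def forward(self, mixture, lip_emb):
        # mixture: (batch, samples); lip_emb: (batch, video_frames, lip_dim)
        feats = F.relu(self.audio_enc(mixture.unsqueeze(1)))      # (B, enc_dim, T)
        visual = self.video_enc(lip_emb).transpose(1, 2)          # (B, hidden, Tv)
        visual = F.interpolate(visual, size=feats.shape[-1])      # align to audio rate
        mask = self.separator(torch.cat([feats, visual], dim=1))  # (B, enc_dim, T)
        return self.audio_dec(feats * mask).squeeze(1)            # target speaker waveform


if __name__ == "__main__":
    model = AVTasNetSketch()
    est = model(torch.randn(2, 16000), torch.randn(2, 25, 256))
    print(est.shape)  # e.g. torch.Size([2, 16000])
```

In this sketch the lip-embedding sequence is interpolated to the encoded-audio frame rate and fused by channel-wise concatenation before masking; the paper's actual fusion strategy, block counts, and video front end may differ.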

URL

http://arxiv.org/abs/1904.03760

PDF

http://arxiv.org/pdf/1904.03760
