
Deep Long Short-Term Memory Adaptive Beamforming Networks For Multichannel Robust Speech Recognition

2017-11-21
Zhong Meng, Shinji Watanabe, John R. Hershey, Hakan Erdogan

Abstract

Far-field speech recognition in noisy and reverberant conditions remains a challenging problem despite recent deep learning breakthroughs. This problem is commonly addressed by acquiring a speech signal from multiple microphones and performing beamforming over them. In this paper, we propose to use a recurrent neural network with long short-term memory (LSTM) architecture to adaptively estimate real-time beamforming filter coefficients to cope with non-stationary environmental noise and the dynamic nature of source and microphone positions, which results in a set of time-varying room impulse responses. The LSTM adaptive beamformer is jointly trained with a deep LSTM acoustic model to predict senone labels. Further, we use hidden units in the deep LSTM acoustic model to assist in predicting the beamforming filter coefficients. The proposed system achieves a 7.97% absolute gain over baseline systems with no beamforming on the CHiME-3 real evaluation set.
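To make the joint architecture concrete, here is a minimal PyTorch sketch of the idea the abstract describes: one LSTM consumes the multichannel frames together with the acoustic model's previous hidden state and emits per-frame filter-and-sum coefficients, which are applied to the microphone signals before a second LSTM predicts senone posteriors. The class name, layer sizes, frame length, and tap count are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointBeamformerAM(nn.Module):
    """Sketch: LSTM adaptive beamformer jointly trained with an LSTM
    acoustic model. All hyperparameters below are assumptions."""

    def __init__(self, channels=6, taps=25, frame_len=400,
                 bf_hidden=256, am_hidden=512, num_senones=2048):
        super().__init__()
        self.channels, self.taps = channels, taps
        self.bf_hidden, self.am_hidden = bf_hidden, am_hidden
        # Beamforming LSTM: sees the current multichannel frame plus the
        # acoustic model's previous hidden state (the "assist" connection).
        self.bf_cell = nn.LSTMCell(channels * frame_len + am_hidden, bf_hidden)
        self.bf_out = nn.Linear(bf_hidden, channels * taps)
        # Acoustic model LSTM over the beamformed frame (a single layer here;
        # the paper uses a deep LSTM).
        self.am_cell = nn.LSTMCell(frame_len, am_hidden)
        self.senone_out = nn.Linear(am_hidden, num_senones)

    def forward(self, frames):
        # frames: (batch, T, channels, frame_len) raw waveform frames
        b, T, c, n = frames.shape
        bf_h = frames.new_zeros(b, self.bf_hidden)
        bf_c = torch.zeros_like(bf_h)
        am_h = frames.new_zeros(b, self.am_hidden)
        am_c = torch.zeros_like(am_h)
        logits = []
        for t in range(T):
            x_t = frames[:, t]  # (b, c, n)
            # Predict a fresh FIR filter per channel for this frame.
            bf_in = torch.cat([x_t.reshape(b, -1), am_h], dim=1)
            bf_h, bf_c = self.bf_cell(bf_in, (bf_h, bf_c))
            w = self.bf_out(bf_h).view(b, c, self.taps)
            # Filter-and-sum: a grouped 1-D convolution lets each utterance
            # in the batch use its own time-varying filters (causal padding,
            # applied within the frame for simplicity).
            x_pad = F.pad(x_t, (self.taps - 1, 0)).reshape(1, b * c, -1)
            y = F.conv1d(x_pad, w.reshape(b * c, 1, self.taps),
                         groups=b * c).view(b, c, n).sum(dim=1)  # (b, n)
            # Acoustic model consumes the beamformed frame.
            am_h, am_c = self.am_cell(y, (am_h, am_c))
            logits.append(self.senone_out(am_h))
        return torch.stack(logits, dim=1)  # (b, T, num_senones)

model = JointBeamformerAM()
wav = torch.randn(2, 10, 6, 400)  # 2 utterances, 10 frames, 6 microphones
senone_logits = model(wav)        # (2, 10, 2048)
```

Joint training then amounts to backpropagating a cross-entropy loss on the senone logits through both the acoustic model and the beamforming LSTM, so the filter estimates are optimized directly for recognition accuracy rather than for a separate signal-level criterion.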

URL

https://arxiv.org/abs/1711.08016

PDF

https://arxiv.org/pdf/1711.08016

