
Modality Attention for End-to-End Audio-visual Speech Recognition

2019-04-23
Pan Zhou, Wenwen Yang, Wei Chen, Yanfeng Wang, Jia Jia

Abstract

Audio-visual speech recognition (AVSR) is considered one of the most promising approaches to robust speech recognition, especially in noisy environments. In this paper, we propose a novel multimodal-attention-based method for audio-visual speech recognition that automatically learns a fused representation from both modalities based on their importance. Our method is realized within state-of-the-art sequence-to-sequence (Seq2seq) architectures. Experimental results show relative improvements of 2% to 36% over the auditory modality alone, depending on the signal-to-noise ratio (SNR). Compared to traditional feature-concatenation methods, our proposed approach achieves better recognition performance under both clean and noisy conditions. We believe this modality-attention-based end-to-end method can be easily generalized to other multimodal tasks with correlated information.
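
The core idea is a per-frame attention over modalities: each modality's encoding receives a learned importance weight, and the fused frame is their weighted sum, so the model can lean on the visual stream when the audio is noisy. Below is a minimal PyTorch sketch of such a fusion layer; the class name, projection sizes, and tanh/softmax scoring network are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityAttentionFusion(nn.Module):
    """Fuse per-frame audio and visual encodings with learned modality
    weights (a sketch; the paper's exact scoring network may differ)."""

    def __init__(self, audio_dim: int, video_dim: int, fused_dim: int):
        super().__init__()
        # Project each modality into a shared space so the two streams
        # can be combined with convex weights.
        self.audio_proj = nn.Linear(audio_dim, fused_dim)
        self.video_proj = nn.Linear(video_dim, fused_dim)
        # One scalar attention score per modality per time step.
        self.score = nn.Linear(fused_dim, 1)

    def forward(self, audio: torch.Tensor, video: torch.Tensor):
        # audio: (batch, time, audio_dim); video: (batch, time, video_dim)
        ha = torch.tanh(self.audio_proj(audio))
        hv = torch.tanh(self.video_proj(video))
        # Stack along a modality axis: (batch, time, 2, fused_dim).
        h = torch.stack([ha, hv], dim=2)
        # Softmax over the modality axis gives importance weights with
        # alpha_audio + alpha_video = 1 at every frame.
        alpha = F.softmax(self.score(h), dim=2)   # (batch, time, 2, 1)
        fused = (alpha * h).sum(dim=2)            # (batch, time, fused_dim)
        return fused, alpha.squeeze(-1)

# Hypothetical usage with made-up feature sizes:
fusion = ModalityAttentionFusion(audio_dim=80, video_dim=512, fused_dim=256)
audio = torch.randn(4, 100, 80)    # e.g., 80-dim filterbank features
video = torch.randn(4, 100, 512)   # e.g., lip-region CNN embeddings
fused, weights = fusion(audio, video)
print(fused.shape, weights.shape)  # (4, 100, 256) and (4, 100, 2)
```

The fused sequence would then feed the Seq2seq encoder-decoder recognizer. Unlike plain feature concatenation, the softmax weights form an explicit, input-dependent convex combination of the two streams, which is what lets the fusion adapt as the SNR changes.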

URL

http://arxiv.org/abs/1811.05250

PDF

http://arxiv.org/pdf/1811.05250

