
Utterance-level end-to-end language identification using attention-based CNN-BLSTM

2019-02-20
Weicheng Cai, Danwei Cai, Shen Huang, Ming Li

Abstract

In this paper, we present an end-to-end language identification framework: the attention-based Convolutional Neural Network-Bidirectional Long Short-Term Memory (CNN-BLSTM). The model operates at the utterance level, meaning the utterance-level decision can be obtained directly from the output of the neural network. To handle speech utterances of arbitrary and potentially long duration, we combine the CNN-BLSTM model with a self-attentive pooling layer. The front-end CNN-BLSTM module acts as a local pattern extractor for the variable-length inputs, and the self-attentive pooling layer built on top produces the fixed-dimensional utterance-level representation. We conducted experiments on the NIST LRE07 closed-set task, and the results show that the proposed attention-based CNN-BLSTM model achieves error reductions comparable to other state-of-the-art utterance-level neural network approaches on all of the 3-second, 10-second, and 30-second duration tasks.
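To illustrate the pooling idea described above, here is a minimal pure-Python sketch of self-attentive pooling: each frame-level feature (e.g. a BLSTM output) is scored against a learnable vector, the scores are softmax-normalized over time, and the weighted sum yields a fixed-dimensional utterance representation regardless of input length. The function and parameter names (`self_attentive_pooling`, `w`) are illustrative, not from the paper, and a real implementation would use a deep-learning framework with learned parameters.

```python
import math

def self_attentive_pooling(frames, w):
    """Collapse a variable-length sequence into one fixed-dim vector.

    frames: list of T frame-level feature vectors, each of length D
            (e.g. the outputs of the front-end CNN-BLSTM).
    w:      scoring vector of length D (learnable in practice).
    """
    # 1. Score each frame by a dot product with w.
    scores = [sum(wi * hi for wi, hi in zip(w, h)) for h in frames]
    # 2. Softmax over time (numerically stabilized by subtracting max).
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    alphas = [e / z for e in exps]
    # 3. Attention-weighted sum of frames -> fixed D-dimensional vector.
    dim = len(frames[0])
    return [sum(a * h[d] for a, h in zip(alphas, frames)) for d in range(dim)]
```

With a zero scoring vector every frame gets equal weight, so the result reduces to the temporal average; a trained `w` instead emphasizes the frames most informative for language identity.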

URL

http://arxiv.org/abs/1902.07374

PDF

http://arxiv.org/pdf/1902.07374

