
Time-Contrastive Learning Based DNN Bottleneck Features for Text-Dependent Speaker Verification

2019-05-11
Achintya Kr. Sarkar, Zheng-Hua Tan

Abstract

In this paper, we present a time-contrastive learning (TCL) based bottleneck (BN) feature extraction method for speech signals, with an application to text-dependent (TD) speaker verification (SV). It is well known that speech signals exhibit quasi-stationary behavior only within short intervals, and the TCL method aims to exploit this temporal structure. More specifically, it trains deep neural networks (DNNs) to discriminate temporal events obtained by uniformly segmenting speech signals, in contrast to existing DNN-based BN feature extraction methods that train DNNs on labeled data to discriminate speakers, pass-phrases, phones, or a combination of them. In the context of speaker verification, speech data of fixed pass-phrases are used for TCL-BN training, and these pass-phrases are excluded from the SV evaluation, so that the learned features can be considered generic. The method is evaluated on the RedDots Challenge 2016 database. Experimental results show that TCL-BN is superior to the existing speaker- and pass-phrase-discriminant BN features and to the Mel-frequency cepstral coefficient (MFCC) feature for text-dependent speaker verification.
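
The key step described in the abstract is to label each frame by the uniform segment it falls into and then train a DNN on those temporal labels, taking the activations of a bottleneck layer as features. Below is a minimal sketch of such time-contrastive label generation, assuming frame-level acoustic features (e.g., MFCCs); the segment length, the number of temporal classes, and the helper name tcl_labels are illustrative assumptions, not values or code from the paper.

```python
# Minimal sketch of time-contrastive label generation (assumed setup, not the
# authors' code): frames are grouped into uniform segments and each frame is
# labeled with its segment index, cycled over a fixed number of classes.
import numpy as np

def tcl_labels(features: np.ndarray, seg_len: int = 10, n_classes: int = 6):
    """Assign each frame the index of its uniform segment, modulo n_classes.

    features: (num_frames, feat_dim) array of acoustic features for one utterance.
    Returns (features, labels) pairs that could serve as supervised targets for a
    DNN whose bottleneck layer is later used as the TCL-BN feature extractor.
    """
    num_frames = features.shape[0]
    # Segment index per frame: frames 0..seg_len-1 -> 0, the next seg_len -> 1, ...
    segment_idx = np.arange(num_frames) // seg_len
    # Cycle segment indices over a fixed number of temporal classes.
    labels = segment_idx % n_classes
    return features, labels

if __name__ == "__main__":
    utterance = np.random.randn(300, 39)  # e.g. 300 frames of 39-dim MFCCs
    feats, labels = tcl_labels(utterance)
    print(labels[:25])  # first 25 frame labels: ten 0s, ten 1s, five 2s
```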

URL

http://arxiv.org/abs/1704.02373

PDF

http://arxiv.org/pdf/1704.02373

