
End-to-End Visual Speech Recognition for Small-Scale Datasets

2019-04-02
Stavros Petridis, Yujiang Wang, Pingchuan Ma, Zuwei Li, Maja Pantic

Abstract

Traditional visual speech recognition systems consist of two stages: feature extraction and classification. Recently, several deep learning approaches have been presented which automatically extract features from the mouth images, aiming to replace the feature extraction stage. However, research on joint learning of features and classification remains limited. In addition, most existing methods require large amounts of data to achieve state-of-the-art performance; otherwise they underperform. In this work, we present an end-to-end visual speech recognition system based on fully-connected layers and Long Short-Term Memory (LSTM) networks which is suitable for small-scale datasets. The model consists of two streams which extract features directly from the mouth images and difference images, respectively. The temporal dynamics in each stream are modelled by a Bidirectional LSTM (BLSTM), and the fusion of the two streams takes place via another BLSTM. Absolute improvements of 0.6%, 3.4%, 3.9%, and 11.4% over the state of the art are reported on the OuluVS2, CUAVE, AVLetters, and AVLetters2 databases, respectively.
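The two-stream design described in the abstract lends itself to a compact sketch. Below is a minimal PyTorch rendering of that architecture, assuming flattened mouth ROIs as input; the layer widths, the way difference images are derived, and the temporal pooling in the classifier head are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn


class Stream(nn.Module):
    """One stream: a per-frame fully-connected encoder followed by a BLSTM."""

    def __init__(self, input_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(              # frame-level feature extractor
            nn.Linear(input_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.blstm = nn.LSTM(hidden_dim, hidden_dim,
                             batch_first=True, bidirectional=True)

    def forward(self, x):                          # x: (batch, time, input_dim)
        feats = self.encoder(x)                    # encode every frame
        out, _ = self.blstm(feats)                 # model temporal dynamics
        return out                                 # (batch, time, 2 * hidden_dim)


class TwoStreamVSR(nn.Module):
    """Mouth-image stream + difference-image stream, fused by another BLSTM."""

    def __init__(self, frame_dim: int, num_classes: int, hidden_dim: int = 256):
        super().__init__()
        self.raw_stream = Stream(frame_dim, hidden_dim)   # raw mouth frames
        self.diff_stream = Stream(frame_dim, hidden_dim)  # frame differences
        self.fusion = nn.LSTM(4 * hidden_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, frames):                     # frames: (batch, time, frame_dim)
        diff = frames[:, 1:] - frames[:, :-1]      # difference images
        diff = torch.cat([diff[:, :1], diff], 1)   # repeat first step to keep length
        fused_in = torch.cat([self.raw_stream(frames),
                              self.diff_stream(diff)], dim=-1)
        fused, _ = self.fusion(fused_in)           # fuse the two streams over time
        return self.classifier(fused.mean(dim=1))  # average pooling over time


# Hypothetical usage: a batch of 8 clips, 29 frames of flattened 96x96 ROIs.
model = TwoStreamVSR(frame_dim=96 * 96, num_classes=10)
logits = model(torch.randn(8, 29, 96 * 96))
print(logits.shape)  # torch.Size([8, 10])
```

Averaging over time in the classifier head is one reasonable choice for utterance-level labels such as digits or letters; per-frame predictions aggregated at the sequence level would be an equally plausible reading of the paper.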

URL

https://arxiv.org/abs/1904.01954

PDF

https://arxiv.org/pdf/1904.01954

