papers AI Learner

Articulatory and bottleneck features for speaker-independent ASR of dysarthric speech

2019-05-16
Emre Yılmaz, Vikramjit Mitra, Ganesh Sivaraman, Horacio Franco

Abstract

Rapid population aging has stimulated the development of assistive devices that provide personalized medical support to people suffering from various etiologies. One prominent clinical application is a computer-assisted speech training system that enables personalized speech therapy for patients with communicative disorders in their home environment. Such a system relies on robust automatic speech recognition (ASR) technology to provide accurate articulation feedback. With the long-term aim of developing off-the-shelf ASR systems that can be incorporated in a clinical context without prior speaker information, we compare the ASR performance of speaker-independent bottleneck and articulatory features on dysarthric speech, used in conjunction with dedicated neural network-based acoustic models that have been shown to be robust against spectrotemporal deviations. We report the ASR performance of these systems on two dysarthric speech datasets with different characteristics to quantify the achieved performance gains. Despite the remaining performance gap between dysarthric and normal speech, significant improvements are reported on both datasets using speaker-independent ASR architectures.

URL

http://arxiv.org/abs/1905.06533

PDF

http://arxiv.org/pdf/1905.06533
