papers AI Learner

Evaluating the Usability of Automatically Generated Captions for People who are Deaf or Hard of Hearing

2017-12-06
Sushant Kafle, Matt Huenerfauth

Abstract

The accuracy of Automated Speech Recognition (ASR) technology has improved, but it is still imperfect in many settings. Researchers who evaluate ASR performance often focus on improving the Word Error Rate (WER) metric, but WER has been found to have little correlation with human-subject performance on many applications. We propose a new captioning-focused evaluation metric that better predicts the impact of ASR recognition errors on the usability of automatically generated captions for people who are Deaf or Hard of Hearing (DHH). Through a user study with 30 DHH users, we compared our new metric with the traditional WER metric on a caption usability evaluation task. In a side-by-side comparison of pairs of ASR text output (with identical WER), the texts preferred by our new metric were preferred by DHH participants. Further, our metric had significantly higher correlation with DHH participants’ subjective scores on the usability of a caption, as compared to the correlation between WER metric and participant subjective scores. This new metric could be used to select ASR systems for captioning applications, and it may be a better metric for ASR researchers to consider when optimizing ASR systems.
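For context on the baseline the paper argues against: Word Error Rate is the edit distance between a reference transcript and the ASR hypothesis, counted over words, divided by the reference length. A minimal sketch (the function name `wer` and the example sentences are illustrative, not from the paper):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: (substitutions + deletions + insertions) / reference word count."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming (Levenshtein) edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution or match
            )
    return d[len(ref)][len(hyp)] / len(ref)


# One substitution ("sat" -> "sit") and one deletion ("the") over 6 reference words:
print(wer("the cat sat on the mat", "the cat sit on mat"))  # 2/6 ≈ 0.333
```

Note that WER treats every error identically; the paper's point is that, for DHH caption readers, which word is wrong matters, so two hypotheses with identical WER can differ sharply in usability.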

URL

https://arxiv.org/abs/1712.02033

PDF

https://arxiv.org/pdf/1712.02033
