
Adversarial attack on Speech-to-Text Recognition Models

2019-01-26
Xiaolei Liu, Kun Wan, Yufei Ding

Abstract

Recent studies have highlighted audio adversarial examples as a ubiquitous threat to state-of-the-art automatic speech recognition systems. Nonetheless, the efficiency and robustness of existing works are not yet satisfactory due to the large search space of audio. In this paper, we present the first study of weighted-sampling audio adversarial examples, which reduces the search space by focusing on the number and positions of distortions. We also propose a new attack scenario, the audio injection attack, which offers novel insights into the concealment of adversarial attacks. Our experimental study shows that we can generate audio adversarial examples with low noise and high robustness in minutes, compared to the hours required by other state-of-the-art methods. (We encourage you to listen to these audio adversarial examples on the accompanying anonymous website.)
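To make the core idea concrete, below is a minimal sketch of restricting an adversarial perturbation to a small set of sampled positions, which is what shrinks the search space. This is an illustration, not the paper's actual algorithm: the energy-based position weighting and the `grad_fn` callback (a stand-in for the gradient of an ASR model's loss with respect to the waveform) are assumptions made here for the example.

```python
import numpy as np

def sample_positions(audio, k, seed=0):
    """Sample k distortion positions, weighted by local signal energy
    (an assumed weighting scheme; the paper defines its own)."""
    rng = np.random.default_rng(seed)
    energy = audio.astype(np.float64) ** 2 + 1e-12  # avoid zero weights on silence
    weights = energy / energy.sum()
    return rng.choice(len(audio), size=k, replace=False, p=weights)

def masked_attack_step(audio, delta, grad_fn, positions, alpha=1e-3, eps=0.05):
    """One signed-gradient update applied only at the sampled positions."""
    mask = np.zeros(audio.shape, dtype=np.float64)
    mask[positions] = 1.0                  # perturb only the sampled samples
    grad = grad_fn(audio + delta)          # hypothetical ASR loss gradient
    delta = delta - alpha * np.sign(grad) * mask
    return np.clip(delta, -eps, eps)       # keep the distortion bounded
```

Given a concrete `grad_fn` (e.g., the gradient of a CTC loss toward a target transcription, differentiated through an ASR model), iterating `masked_attack_step` optimizes only the k sampled samples rather than the entire waveform, which is the search-space reduction the abstract refers to.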

URL

http://arxiv.org/abs/1901.10300

PDF

http://arxiv.org/pdf/1901.10300

