
Adversarial Image Translation: Unrestricted Adversarial Examples in Face Recognition Systems

2019-05-09
Kazuya Kakizaki, Kosuke Yoshida

Abstract

Thanks to recent advances in Deep Neural Networks (DNNs), face recognition systems have achieved high accuracy in classifying large numbers of face images. However, recent work demonstrates that DNNs can be vulnerable to adversarial examples, raising concerns about the robustness of face recognition systems. In particular, adversarial examples that are not restricted to small perturbations could pose more serious risks, since conventional certified defenses might be ineffective against them. To shed light on the vulnerability of face recognition systems to this type of adversarial example, we propose a flexible and efficient method to generate unrestricted adversarial examples using image translation techniques. Our method translates a source image into any desired facial appearance with large perturbations so that target face recognition systems can be deceived. We demonstrate through our experiments that our method achieves attack success rates of about 90% and 30% under white-box and black-box settings, respectively. We also illustrate that the generated images are perceptually realistic and maintain personal identity, while the perturbations are large enough to defeat certified defenses.
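The abstract describes training an image translation generator whose outputs both resemble the source face and fool a target face recognition classifier. Below is a minimal sketch of that idea in PyTorch, not the paper's actual method: the toy `SimpleGenerator`, the simple L1 reconstruction term standing in for identity preservation, and all hyperparameters are illustrative assumptions.

```python
# Sketch: optimize a translation generator so its output is (i) close to the
# source face and (ii) classified as a chosen target identity by a frozen
# face recognition model. Assumes images normalized to [-1, 1].
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGenerator(nn.Module):
    """Toy encoder-decoder mapping a source face to a translated face."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

def train_adversarial_generator(generator, face_classifier, source_images,
                                target_label, steps=1000, lr=1e-4, lambda_rec=10.0):
    """Jointly minimize an adversarial loss (impersonate `target_label`) and a
    reconstruction loss (stay perceptually close to the source face)."""
    face_classifier.eval()  # target model is frozen; only the generator is trained
    opt = torch.optim.Adam(generator.parameters(), lr=lr)
    for _ in range(steps):
        translated = generator(source_images)          # unrestricted, image-level change
        logits = face_classifier(translated)
        adv_loss = F.cross_entropy(logits, target_label)   # push toward target identity
        rec_loss = F.l1_loss(translated, source_images)     # crude identity/appearance anchor
        loss = adv_loss + lambda_rec * rec_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator
```

In the white-box setting the attacked classifier itself plays the role of `face_classifier`; a black-box attack would instead optimize against a substitute model and rely on transferability, which is consistent with the lower success rate the abstract reports for that setting.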

URL

http://arxiv.org/abs/1905.03421

PDF

http://arxiv.org/pdf/1905.03421

