
Image Transformation can make Neural Networks more robust against Adversarial Examples

2019-01-10
Dang Duy Thang, Toshihiro Matsui

Abstract

Neural networks are being applied to many IoT-related tasks with encouraging results. For example, neural networks can precisely detect humans, objects, and animals in surveillance camera footage for security purposes. However, neural networks have recently been found to be vulnerable to well-designed input samples called adversarial examples. These cause neural networks to misclassify inputs whose perturbations are imperceptible to humans. We found that rotating an adversarial image can defeat the effect of the adversarial perturbation. Using MNIST digit images as the originals, we first generated adversarial examples against a neural network recognizer, which was completely fooled by the forged examples. We then rotated the adversarial images and fed them back to the recognizer, which regained the correct recognition. Thus, we empirically confirmed that rotating input images can protect neural-network-based pattern recognizers from adversarial example attacks.
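The defense described above amounts to a simple pre-processing step: rotate the input before classification. A minimal sketch of such a step is shown below, using SciPy's `ndimage.rotate` on a synthetic 28x28 MNIST-like array; the function name `rotate_defense`, the rotation angle, and the synthetic input are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.ndimage import rotate

def rotate_defense(image, angle=20.0):
    """Rotate a 2-D image by `angle` degrees while keeping its shape.

    Hypothetical pre-processing step applied before classification;
    the paper does not specify an exact angle here.
    """
    return rotate(image, angle, reshape=False, mode="constant", cval=0.0)

# Synthetic 28x28 MNIST-like array standing in for an adversarial example.
adv = np.zeros((28, 28), dtype=np.float32)
adv[10:18, 12:16] = 1.0

defended = rotate_defense(adv, angle=20.0)
print(defended.shape)  # (28, 28)
```

In practice the rotated image `defended` would then be passed to the recognizer in place of the raw adversarial input; `reshape=False` keeps the output the same size the network expects.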

URL

https://arxiv.org/abs/1901.03037

PDF

https://arxiv.org/pdf/1901.03037
