
Adaptive Gradient for Adversarial Perturbations Generation

2019-04-12
Yatie Xiao, Chi-Man Pun

Abstract

Deep neural networks have achieved remarkable success in computer vision, natural language processing, and audio tasks. However, in the image classification domain, research has shown that deep neural models are easily fooled by perturbations, which may cause severe consequences. Many attack methods generate adversarial perturbations with large-scale pixel modification and low similarity between the original images and the corresponding adversarial examples. To address these issues, we propose an adversarial approach with an adaptive mechanism that self-adjusts the perturbation intensity to directly seek the boundary distance between different classes, which can escape local minima during gradient processing. In this paper, we evaluate several traditional perturbation-generation methods against our work. Experimental results show that our approach works well, outperforms recent techniques at changing the predicted class of an image, and demonstrates excellent efficiency in fooling deep network models.
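The abstract gives no implementation details, but the core idea of self-adjusting the perturbation intensity along the gradient can be sketched as an iterative attack whose step size adapts to the model's current prediction. The sketch below is a hypothetical PyTorch illustration, not the authors' exact algorithm: the function name `adaptive_gradient_attack`, the grow/decay factors, and the sign-step update are all assumptions.

```python
import torch
import torch.nn.functional as F

def adaptive_gradient_attack(model, x, y, steps=20, alpha0=0.01,
                             grow=1.5, decay=0.5):
    """Hypothetical iterative attack with a self-adjusting step size.

    The step alpha grows while the model still predicts the true label
    (to push out of flat regions / local minima) and shrinks once the
    decision boundary is crossed, refining the estimate of the boundary
    distance. All constants here are illustrative assumptions.
    """
    x_adv = x.clone().detach()
    alpha = alpha0
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        loss = F.cross_entropy(logits, y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Adapt the perturbation intensity based on the current prediction.
        still_correct = logits.argmax(dim=1).eq(y).all().item()
        alpha = alpha * grow if still_correct else alpha * decay
        # Gradient-sign ascent on the loss, kept in the valid pixel range.
        x_adv = (x_adv.detach() + alpha * grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()
```

Note that on a batch this sketch adapts one global step size; a per-example alpha would match the per-image "boundary distance" goal more closely, at the cost of slightly more bookkeeping.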

URL

http://arxiv.org/abs/1902.01220

PDF

http://arxiv.org/pdf/1902.01220
