
Smooth Adversarial Examples

2019-03-28
Hanwei Zhang, Yannis Avrithis, Teddy Furon, Laurent Amsaleg

Abstract

This paper investigates the visual quality of adversarial examples. Recent papers propose to smooth the perturbations to get rid of high-frequency artefacts. In this work, smoothing has a different meaning: it perceptually shapes the perturbation according to the visual content of the image to be attacked. The perturbation becomes locally smooth on the flat areas of the input image, but it may be noisy on its textured areas and sharp across its edges. This operation relies on Laplacian smoothing, well known in graph signal processing, which we integrate into the attack pipeline. We benchmark several attacks with and without smoothing under a white-box scenario and evaluate their transferability. Despite the additional constraint of smoothness, our attack has the same probability of success at lower distortion.
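The key ingredient described above is Laplacian smoothing from graph signal processing, applied to the perturbation with weights derived from the image being attacked, so that smoothing is strong on flat regions and weak across edges. The sketch below is a minimal illustration of that idea, not the authors' implementation: the 4-neighbour pixel graph, the Gaussian photometric weights, and the parameters `sigma` and `lam` are assumptions for demonstration purposes.

```python
# Minimal sketch (assumed, not the paper's code) of content-adaptive Laplacian
# smoothing of a perturbation: edge weights come from the clean image, so the
# smoothed perturbation stays flat where the image is flat and can remain sharp
# across image edges.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def image_laplacian(img, sigma=0.1):
    """Graph Laplacian on a 4-connected pixel grid with photometric edge weights."""
    h, w = img.shape
    idx = np.arange(h * w).reshape(h, w)
    rows, cols, vals = [], [], []
    for di, dj in [(0, 1), (1, 0)]:                 # right and down neighbours
        a = idx[: h - di, : w - dj].ravel()
        b = idx[di:, dj:].ravel()
        diff = img[: h - di, : w - dj].ravel() - img[di:, dj:].ravel()
        wgt = np.exp(-(diff ** 2) / (sigma ** 2))   # small weight across strong edges
        rows += [a, b]
        cols += [b, a]
        vals += [wgt, wgt]
    W = sp.coo_matrix(
        (np.concatenate(vals), (np.concatenate(rows), np.concatenate(cols))),
        shape=(h * w, h * w),
    ).tocsr()
    D = sp.diags(np.asarray(W.sum(axis=1)).ravel())
    return D - W

def smooth_perturbation(perturbation, img, lam=5.0, sigma=0.1):
    """Laplacian smoothing guided by the image: solve (I + lam * L) r_s = r."""
    h, w = img.shape
    L = image_laplacian(img, sigma)
    A = (sp.eye(h * w) + lam * L).tocsc()
    r_s = spsolve(A, perturbation.ravel())
    return r_s.reshape(h, w)

# Usage: smooth a random perturbation of a toy 32x32 grayscale image.
rng = np.random.default_rng(0)
img = rng.random((32, 32))
r = 0.05 * rng.standard_normal((32, 32))
r_smooth = smooth_perturbation(r, img)
print(r_smooth.shape)
```

In the paper this kind of smoothing operator is integrated into the attack pipeline itself (i.e. the adversarial perturbation is optimised under the smoothness constraint), rather than applied as a post-processing step as in this toy example.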

URL

http://arxiv.org/abs/1903.11862

PDF

http://arxiv.org/pdf/1903.11862

