
Bandlimiting Neural Networks Against Adversarial Attacks

2019-05-30
Yuping Lin, Kasra Ahmadi K. A., Hui Jiang

Abstract

In this paper, we study the adversarial attack and defence problem in deep learning from the perspective of Fourier analysis. We first explicitly compute the Fourier transform of deep ReLU neural networks and show that there exist decaying but non-zero high-frequency components in the Fourier spectrum of neural networks. We demonstrate that the vulnerability of neural networks to adversarial samples can be attributed to these insignificant but non-zero high-frequency components. Based on this analysis, we propose a simple post-averaging technique that smooths out these high-frequency components to improve the robustness of neural networks against adversarial attacks. Experimental results on the ImageNet dataset show that our proposed method is universally effective in defending against many existing adversarial attack methods proposed in the literature, including FGSM, PGD, DeepFool and C&W attacks. Our post-averaging method is simple in that it does not require any re-training, yet it can successfully defend against over 95% of the adversarial samples generated by these methods without introducing any significant performance degradation (less than 1%) on the original clean images.
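The abstract names the defence only at a high level; below is a minimal sketch of the post-averaging idea, assuming the smoothed prediction is the average of the model's outputs over random points drawn from a small ball around the input. The function name `post_average_predict`, the ball-sampling scheme, and the hyperparameter values are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def post_average_predict(model_fn, x, num_samples=15, radius=0.02, rng=None):
    """Hypothetical post-averaging defence: average the model's outputs
    over random points sampled from a small ball around the input x.

    model_fn    : callable mapping a batch of inputs to class scores
    x           : a single input, e.g. an image array
    num_samples : number of neighbourhood samples (illustrative value)
    radius      : radius of the sampling ball (illustrative value)
    """
    rng = np.random.default_rng() if rng is None else rng
    # Draw perturbations uniformly from a ball of the given radius:
    # sample directions on the unit sphere, then scale by random radii.
    directions = rng.normal(size=(num_samples,) + x.shape)
    norms = np.linalg.norm(directions.reshape(num_samples, -1), axis=1)
    directions /= norms.reshape((num_samples,) + (1,) * x.ndim)
    radii = radius * rng.uniform(size=(num_samples,) + (1,) * x.ndim) ** (1.0 / x.size)
    neighbours = x[None] + radii * directions
    # Averaging the outputs over the neighbourhood acts as a low-pass
    # filter on the decision function, damping its high-frequency part.
    return model_fn(neighbours).mean(axis=0)
```

In this reading, a small adversarial perturbation that exploits the high-frequency components can flip the prediction at the exact input point, but is unlikely to flip the average over a whole neighbourhood, which is why the smoothed classifier is more robust while leaving clean-image accuracy nearly unchanged.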

URL

http://arxiv.org/abs/1905.12797

PDF

http://arxiv.org/pdf/1905.12797

