
Scaleable input gradient regularization for adversarial robustness

2019-05-27
Chris Finlay, Adam M Oberman

Abstract

Input gradient regularization is not thought to be an effective means for promoting adversarial robustness. In this work we revisit this regularization scheme with some new ingredients. First, we derive new per-image theoretical robustness bounds based on local gradient information, and curvature information when available. These bounds strongly motivate input gradient regularization. Second, we implement a scaleable version of input gradient regularization which avoids double backpropagation: adversarially robust ImageNet models are trained in 33 hours on four consumer grade GPUs. Finally, we show experimentally that input gradient regularization is competitive with adversarial training.
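The scaleability claim rests on estimating the input-gradient penalty without differentiating through a gradient (i.e. without double backpropagation): the norm of the input gradient is the directional derivative of the loss along the normalized gradient, so it can be approximated by a cheap finite difference of the loss. A minimal sketch of that idea, using a toy logistic loss with an analytic input gradient in place of a network (all names here are illustrative, not the authors' code):

```python
import numpy as np

def loss(x, w, y):
    # Logistic loss for one example; stands in for a network's loss
    z = y * np.dot(w, x)
    return np.log1p(np.exp(-z))

def input_grad(x, w, y):
    # Analytic gradient of the loss w.r.t. the input x
    # (in a network this would be one ordinary backward pass)
    z = y * np.dot(w, x)
    s = 1.0 / (1.0 + np.exp(z))  # sigmoid(-z)
    return -y * s * w

def grad_norm_fd(x, w, y, h=1e-4):
    # ||grad_x loss|| equals the directional derivative of the loss
    # along d = g / ||g||, so a finite difference of the *loss* along d
    # estimates it without differentiating through the gradient itself.
    g = input_grad(x, w, y)
    d = g / (np.linalg.norm(g) + 1e-12)
    return (loss(x + h * d, w, y) - loss(x, w, y)) / h

rng = np.random.default_rng(0)
x = rng.normal(size=5)
w = rng.normal(size=5)
y = 1.0
exact = np.linalg.norm(input_grad(x, w, y))
approx = grad_norm_fd(x, w, y)
print(exact, approx)  # the two estimates agree to O(h)
```

In a real training loop the finite-difference term would be squared and added to the data loss, with the gradient direction treated as a constant so that backpropagating the penalty costs only one extra forward/backward pass.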

URL

https://arxiv.org/abs/1905.11468

PDF

https://arxiv.org/pdf/1905.11468

