
Adversarially Robust Distillation

2019-05-23
Micah Goldblum, Liam Fowl, Soheil Feizi, Tom Goldstein

Abstract

Knowledge distillation is effective for producing small, high-performance neural networks for classification, but these small networks are vulnerable to adversarial attacks. We first study how robustness transfers from a robust teacher to a student network during knowledge distillation. We find that a large amount of robustness may be inherited by the student even when it is distilled on only clean images. Second, we introduce Adversarially Robust Distillation (ARD) for distilling robustness onto small student networks. ARD is an analogue of adversarial training, but for distillation. In addition to producing small models with high test accuracy like conventional distillation, ARD also passes the superior robustness of large networks onto the student. In our experiments, we find that ARD student models decisively outperform adversarially trained networks of identical architecture in robust accuracy. Finally, we adapt recent fast adversarial training methods to ARD for accelerated robust distillation.
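The abstract describes ARD as the distillation analogue of adversarial training but does not spell out the objective. Below is a minimal PyTorch sketch of what one ARD-style loss computation could look like, assuming a PGD-style inner maximization of the student-teacher distillation loss and a soft-label term weighted against a clean cross-entropy term; the hyperparameter names and default values (eps, alpha_step, temp, mix) are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def ard_loss(student, teacher, x, y, eps=8/255, alpha_step=2/255,
             steps=10, temp=30.0, mix=0.9):
    """Sketch of one Adversarially Robust Distillation (ARD) loss step.

    Assumptions: images in [0, 1], a frozen robust teacher, and an
    inner loop that perturbs the input to maximize the KL divergence
    between the student's soft predictions and the teacher's soft
    labels on the clean input. Hyperparameters are illustrative.
    """
    teacher.eval()
    with torch.no_grad():
        t_soft = F.softmax(teacher(x) / temp, dim=1)  # teacher sees clean x

    # Inner maximization: PGD on the distillation loss w.r.t. the input.
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        s_logits = student(torch.clamp(x + delta, 0.0, 1.0))
        inner = F.kl_div(F.log_softmax(s_logits / temp, dim=1),
                         t_soft, reduction="batchmean")
        grad, = torch.autograd.grad(inner, delta)
        delta = (delta + alpha_step * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)

    x_adv = torch.clamp(x + delta.detach(), 0.0, 1.0)

    # Outer minimization: match the teacher's clean soft labels on
    # adversarial inputs, plus a clean hard-label cross-entropy term.
    kd = F.kl_div(F.log_softmax(student(x_adv) / temp, dim=1),
                  t_soft, reduction="batchmean") * (temp ** 2)
    ce = F.cross_entropy(student(x), y)
    return mix * kd + (1 - mix) * ce
```

The temp**2 factor follows the standard distillation convention of rescaling gradients softened by the temperature; whether ARD uses exactly this weighting is an assumption of this sketch.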

URL

http://arxiv.org/abs/1905.09747

PDF

http://arxiv.org/pdf/1905.09747
