Abstract
Convolutional Neural Networks, and Deep Learning classification systems in general, have been shown to be vulnerable to attack by specially crafted data samples that appear to belong to one class but are classified as another; these are commonly known as adversarial examples. A variety of attack strategies have been proposed to craft such samples, but there is no standard model against which the success of each type of attack can be compared. Furthermore, no literature is currently available that evaluates how common hyperparameters and optimization strategies affect a model's ability to resist these samples. This research addresses that gap and provides a basis for selecting training and model parameters in future research on evasion attacks against convolutional neural networks. The findings of this work indicate that the choice of model hyperparameters does affect a model's ability to resist attack, although hyperparameters alone cannot prevent the existence of adversarial examples.
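To make the notion of a crafted evasion sample concrete, the sketch below shows one common attack strategy, the Fast Gradient Sign Method (FGSM). The abstract does not name a specific attack, so the choice of FGSM, the toy model, and the epsilon value here are illustrative assumptions only.

```python
import torch
import torch.nn as nn

def fgsm_example(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Perturb input x in the direction that increases the classification
    loss for true label y, keeping the change bounded by epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed gradient step, then clip back to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Usage sketch with a placeholder convolutional classifier and random "image".
model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Flatten(), nn.Linear(8 * 28 * 28, 10))
x = torch.rand(1, 1, 28, 28)   # placeholder input
y = torch.tensor([3])          # placeholder true label
x_adv = fgsm_example(model, x, y)
print((x_adv - x).abs().max())  # perturbation magnitude stays within epsilon
```

The perturbation is small enough that the image still looks like the original class to a human, yet it can push the model's prediction across a decision boundary.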
URL
http://arxiv.org/abs/1902.05586