
Adversarial attacks hidden in plain sight

2019-02-25
Jan Philip Göpfert, Heiko Wersing, Barbara Hammer

Abstract

Convolutional neural networks have been used to achieve a string of successes during recent years, but their lack of interpretability remains a serious issue. Adversarial examples are designed to deliberately fool neural networks into making any desired incorrect classification, potentially with very high certainty. We underline the severity of the issue by presenting a technique that hides such adversarial attacks in regions of high complexity, so that they are imperceptible even to an astute observer.

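To make the idea concrete, below is a minimal sketch, not the authors' method, of how an adversarial perturbation can be concentrated in high-complexity image regions. It assumes a PyTorch classifier, inputs in [0, 1] with NCHW layout, and uses the fast gradient sign method (FGSM) as the base attack; the complexity measure (local intensity-gradient magnitude) is an illustrative choice, not the one from the paper.

```python
import torch
import torch.nn.functional as F

def local_complexity(x: torch.Tensor) -> torch.Tensor:
    """Per-pixel complexity estimate: magnitude of horizontal/vertical
    intensity differences, averaged over channels and scaled to [0, 1].
    Illustrative stand-in for the paper's notion of 'high complexity'."""
    dx = x[..., :, 1:] - x[..., :, :-1]          # horizontal differences
    dy = x[..., 1:, :] - x[..., :-1, :]          # vertical differences
    mag = F.pad(dx.abs(), (0, 1)) + F.pad(dy.abs(), (0, 0, 0, 1))
    mag = mag.mean(dim=1, keepdim=True)          # average over channels
    return mag / (mag.amax(dim=(-2, -1), keepdim=True) + 1e-8)

def masked_fgsm(model, x, y, eps=0.03):
    """FGSM perturbation scaled by local complexity, so the attack is
    strongest in textured regions where changes are hard to spot."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    mask = local_complexity(x.detach())          # near 0 in flat regions
    x_adv = x.detach() + eps * mask * x.grad.sign()
    return x_adv.clamp(0.0, 1.0)
```

Compared with plain FGSM, the mask suppresses the perturbation in flat, low-texture areas (sky, uniform backgrounds), where even small changes are easiest for a human to notice.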
URL

http://arxiv.org/abs/1902.09286

PDF

http://arxiv.org/pdf/1902.09286

