
Adversarial Camera Stickers: A Physical Camera Attack on Deep Learning Classifiers

2019-03-21
Juncheng B. Li, Frank R. Schmidt, J. Zico Kolter

Abstract

Recent work has thoroughly documented the susceptibility of deep learning systems to adversarial examples, but most such instances directly manipulate the digital input to a classifier. Although a smaller line of work considers physical adversarial attacks, in all cases these involve manipulating the object of interest, e.g., putting a physical sticker on an object to misclassify it, or manufacturing an object specifically intended to be misclassified. In this work, we consider an alternative question: is it possible to fool deep classifiers, over all perceived objects of a certain type, by physically manipulating the camera itself? We show that this is indeed possible: by placing a carefully crafted and mainly-translucent sticker over the lens of a camera, one can create universal perturbations of the observed images that are inconspicuous, yet reliably misclassify target objects as a different (targeted) class. To accomplish this, we propose an iterative procedure for both updating the attack perturbation (to make it adversarial for a given classifier), and the threat model itself (to ensure it is physically realizable). For example, we show that we can achieve physically-realizable attacks that fool ImageNet classifiers in a targeted fashion 49.6% of the time. This presents a new class of physically-realizable threat models to consider in the context of adversarially robust machine learning. Link to our demo video: https://youtu.be/wUVmL33Fx54
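
To make the idea concrete, below is a minimal sketch (not the authors' implementation) of the digital half of such an attack: the sticker on the lens is modeled as a few translucent, blurry dots alpha-blended over every incoming image, and the dot colors and opacities are optimized so that a fixed ImageNet classifier predicts a chosen target class. The dot model, hyperparameters, target label, and optimizer are illustrative assumptions, and the paper's alternating update of the physical threat model (fitting the model to what a printed sticker actually produces on the lens) is omitted for brevity.

```python
# Illustrative sketch only: optimize a few translucent dots, applied to all
# images, so a pretrained classifier outputs a chosen target class.
import torch
import torch.nn.functional as F
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).to(device).eval()
for p in model.parameters():
    p.requires_grad_(False)

H = W = 224
K = 10                       # number of translucent dots (assumed)
target_class = 724           # hypothetical target label

# Dot parameters: fixed centers/radii, learnable RGB colors and peak opacities.
centers = torch.rand(K, 2, device=device) * torch.tensor([W, H], device=device, dtype=torch.float32)
radii   = torch.full((K,), 25.0, device=device)
colors  = torch.rand(K, 3, device=device, requires_grad=True)
alphas  = torch.full((K,), 0.3, device=device, requires_grad=True)

yy, xx = torch.meshgrid(torch.arange(H, device=device, dtype=torch.float32),
                        torch.arange(W, device=device, dtype=torch.float32),
                        indexing="ij")

def apply_sticker(images):
    """Alpha-blend K soft (Gaussian-profile) translucent dots onto a batch."""
    out = images
    for k in range(K):
        d2 = (xx - centers[k, 0]) ** 2 + (yy - centers[k, 1]) ** 2
        mask = torch.exp(-d2 / (2 * radii[k] ** 2))          # soft dot, shape (H, W)
        a = (alphas[k].clamp(0, 0.5) * mask).unsqueeze(0)    # cap opacity: stay translucent
        c = colors[k].clamp(0, 1).view(3, 1, 1)
        out = (1 - a) * out + a * c
    return out

mean = torch.tensor([0.485, 0.456, 0.406], device=device).view(1, 3, 1, 1)
std  = torch.tensor([0.229, 0.224, 0.225], device=device).view(1, 3, 1, 1)

opt = torch.optim.Adam([colors, alphas], lr=0.05)
images = torch.rand(8, 3, H, W, device=device)  # stand-in for real photos of the victim class

for step in range(200):
    logits = model((apply_sticker(images) - mean) / std)
    loss = F.cross_entropy(logits, torch.full((8,), target_class, device=device))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the physical setting, the same optimization would be interleaved with re-fitting the dot model to photographs taken through the actual printed sticker, which is what the paper's iterative procedure addresses.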

URL

http://arxiv.org/abs/1904.00759

PDF

http://arxiv.org/pdf/1904.00759

