
Seeing isn't Believing: Practical Adversarial Attack Against Object Detectors

2019-04-12
Yue Zhao, Hong Zhu, Ruigang Liang, Qintao Shen, Shengzhi Zhang, Kai Chen

Abstract

In this paper, we design the first practical adversarial attacks against object detectors in realistic situations: printed adversarial examples (AEs), placed at different angles and distances, can hide or create an object to deceive object detectors. To improve the robustness of the AEs, we propose novel nested AEs and introduce image transformation techniques to simulate variance factors such as distance, angle, and illumination in the physical world. To make the AEs converge under so many constraints, we also design batch-variation momentum training. Evaluation results show that our adversarial examples work in real environments and are capable of attacking state-of-the-art real-time object detectors (e.g., YOLO v3 and Faster R-CNN) with a success rate of up to 92.4% (at distances ranging from 1 m to 25 m and wide angles from -60° to 60°). Our AEs are also shown to be highly transferable: four state-of-the-art black-box models can be attacked by our AEs with high success rates.
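The robustness recipe described in the abstract, optimizing the patch through randomly sampled physical transformations while accumulating a momentum gradient over a batch of transformed variants, can be sketched as below. This is a minimal illustration of one reading of the idea, not the paper's implementation: `detector_loss`, the transformation ranges, and all hyperparameters are hypothetical placeholders.

```python
import math
import torch
import torch.nn.functional as F

def random_physical_transform(img):
    """Randomly scale, rotate, and re-illuminate a patch (NCHW, values in
    [0, 1]) to crudely simulate viewing distance, angle, and lighting.
    The ranges here are illustrative guesses, not the paper's settings."""
    theta = (torch.rand(()) * 2 - 1) * math.radians(60)   # viewing angle
    scale = 0.7 + 0.6 * torch.rand(())                    # viewing distance
    c, s = torch.cos(theta) * scale, torch.sin(theta) * scale
    zero = torch.zeros(())
    mat = torch.stack([torch.stack([c, -s, zero]),
                       torch.stack([s,  c, zero])]).unsqueeze(0)
    grid = F.affine_grid(mat, list(img.shape), align_corners=False)
    img = F.grid_sample(img, grid, align_corners=False)
    return (img * (0.7 + 0.6 * torch.rand(()))).clamp(0.0, 1.0)  # illumination

def train_patch(detector_loss, patch, steps=200, lr=0.01, mu=0.9, batch=8):
    """Optimize an adversarial patch with a momentum-accumulated gradient
    averaged over a batch of transformed variants (one interpretation of
    batch-variation momentum training). `detector_loss` is a hypothetical
    callable mapping a rendered patch to the detector's attack loss."""
    patch = patch.clone().requires_grad_(True)
    velocity = torch.zeros_like(patch)
    for _ in range(steps):
        loss = sum(detector_loss(random_physical_transform(patch))
                   for _ in range(batch)) / batch
        (grad,) = torch.autograd.grad(loss, patch)
        velocity = mu * velocity + grad / grad.abs().mean().clamp_min(1e-12)
        with torch.no_grad():
            patch -= lr * velocity.sign()   # signed momentum step
            patch.clamp_(0.0, 1.0)          # keep the patch printable
    return patch.detach()
```

Note that the full attack also renders the patch into scene images and embeds a smaller AE inside a larger one (the nested design), both of which this sketch omits.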

URL

http://arxiv.org/abs/1812.10217

PDF

http://arxiv.org/pdf/1812.10217
