
Snooping Attacks on Deep Reinforcement Learning

2019-05-28
Matthew Inkawhich, Yiran Chen, Hai Li

Abstract

Adversarial attacks have exposed a significant security vulnerability in state-of-the-art machine learning models, including deep reinforcement learning agents. Existing methods for attacking reinforcement learning agents assume the adversary has access either to the target agent's learned parameters or to the environment that the agent interacts with. In this work, we propose a new class of threat models, called snooping threat models, that are unique to reinforcement learning. In these snooping threat models, the adversary cannot interact with the environment itself and can only eavesdrop on the action and reward signals exchanged between agent and environment. We show that adversaries operating under these highly constrained threat models can still launch devastating attacks against the target agent by training proxy models on related tasks and leveraging the transferability of adversarial examples.
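
To make the transferability idea concrete, below is a minimal PyTorch sketch of how an eavesdropping adversary might use its own proxy model to perturb the observations seen by a target agent. All names (ProxyPolicy, fgsm_perturbation, the network sizes, and epsilon) are hypothetical illustrations, not the paper's actual procedure; the sketch assumes the proxy has already been trained on a related task from snooped action/reward signals and uses a simple FGSM-style perturbation as the transfer attack.

```python
import torch
import torch.nn as nn


class ProxyPolicy(nn.Module):
    """Stand-in proxy network the adversary trains on a related task (hypothetical)."""

    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)  # action logits


def fgsm_perturbation(proxy: ProxyPolicy, obs: torch.Tensor, eps: float) -> torch.Tensor:
    """Craft an L-infinity perturbation against the proxy, hoping it transfers to the target."""
    obs = obs.clone().detach().requires_grad_(True)
    logits = proxy(obs)
    # Push the observation away from the proxy's currently preferred action.
    preferred_action = logits.argmax(dim=-1)
    loss = nn.functional.cross_entropy(logits, preferred_action)
    loss.backward()
    return (obs + eps * obs.grad.sign()).detach()


if __name__ == "__main__":
    proxy = ProxyPolicy(obs_dim=8, n_actions=4)   # assume already trained by the adversary
    clean_obs = torch.randn(1, 8)                 # observation the target agent would receive
    adv_obs = fgsm_perturbation(proxy, clean_obs, eps=0.05)
    print("perturbation L-inf norm:", (adv_obs - clean_obs).abs().max().item())
```

The key point of the sketch is that the gradient is taken with respect to the adversary's proxy, never the target agent's parameters or the environment; the attack succeeds only to the extent that adversarial examples transfer between the two models.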

URL

https://arxiv.org/abs/1905.11832

PDF

https://arxiv.org/pdf/1905.11832

