papers AI Learner

Balancing Goal Obfuscation and Goal Legibility in Settings with Cooperative and Adversarial Observers

2019-05-25
Anagha Kulkarni, Siddharth Srivastava, Subbarao Kambhampati

Abstract

In order to be useful in the real world, AI agents need to plan and act in the presence of others, who may include adversarial and cooperative entities. In this paper, we consider the problem where an autonomous agent needs to act in a manner that clarifies its objectives to cooperative entities while preventing adversarial entities from inferring those objectives. We show that this problem is solvable when cooperative and adversarial entities use different types of sensors and/or prior knowledge. We develop two new solution approaches for computing such plans. The first uses an IP solver to compute an optimal solution that maximizes obfuscation for adversarial entities while maximizing legibility for cooperative entities in the environment; the second uses heuristic-guided forward search to compute a satisficing solution that achieves preset levels of obfuscation and legibility for adversarial and cooperative entities, respectively. We show the feasibility and utility of our algorithms through extensive empirical evaluation on problems derived from planning benchmarks.
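
To make the satisficing approach concrete, here is a minimal sketch of a heuristic-guided forward search that accepts a plan only once preset obfuscation and legibility levels are met. It assumes a hypothetical `problem` interface (`initial_state`, `is_goal`, `successors`, `adversary_observations`, `cooperative_observations`, `consistent_goals`) and thresholds `k_obf` / `k_leg`; it is an illustration of the idea described in the abstract, not the authors' implementation.

```python
import heapq
import itertools

def satisficing_plan(problem, k_obf, k_leg, heuristic):
    """Search for a plan whose observation trace keeps at least k_obf
    candidate goals plausible to the adversarial observer while narrowing
    the cooperative observer down to at most k_leg candidate goals."""
    tie = itertools.count()  # tie-breaker so heapq never compares states
    frontier = [(heuristic(problem.initial_state), next(tie), 0,
                 problem.initial_state, [])]
    visited = set()
    while frontier:
        _, _, g, state, plan = heapq.heappop(frontier)
        if state in visited:
            continue
        visited.add(state)
        if problem.is_goal(state):
            adv_obs = problem.adversary_observations(plan)
            coop_obs = problem.cooperative_observations(plan)
            # Each observer filters candidate goals through its own sensor
            # model; accept the plan only if both preset levels are met.
            if (problem.consistent_goals(adv_obs) >= k_obf and
                    problem.consistent_goals(coop_obs) <= k_leg):
                return plan
            continue  # goal reached but thresholds not met: keep searching
        for action, succ, cost in problem.successors(state):
            if succ not in visited:
                heapq.heappush(frontier, (g + cost + heuristic(succ),
                                          next(tie), g + cost, succ,
                                          plan + [action]))
    return None  # no plan achieves the requested obfuscation/legibility
```

The key point the sketch captures is that the two observers are modeled separately (different sensors and/or priors), so the same plan can look ambiguous to one and informative to the other.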

URL

http://arxiv.org/abs/1905.10672

PDF

http://arxiv.org/pdf/1905.10672

