
Decentralization of Multiagent Policies by Learning What to Communicate

2019-01-24
James Paulos, Steven W. Chen, Daigo Shishika, Vijay Kumar

Abstract

Effective communication is required for teams of robots to solve sophisticated collaborative tasks. In practice it is typical for both the encoding and semantics of communication to be manually defined by an expert; this is true regardless of whether the behaviors themselves are bespoke, optimization based, or learned. We present an agent architecture and training methodology using neural networks to learn task-oriented communication semantics based on the example of a communication-unaware expert policy. A perimeter defense game illustrates the system’s ability to handle dynamically changing numbers of agents and its graceful degradation in performance as communication constraints are tightened or the expert’s observability assumptions are broken.
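
The abstract describes learning task-oriented communication by imitating a communication-unaware expert. Below is a minimal sketch, not the authors' architecture, of one way such a setup could look in PyTorch: each agent encodes neighbors' observations into fixed-width messages, aggregates them with a permutation-invariant mean (so the number of agents can change at run time), and decodes its own observation plus the aggregate into an action, trained by regression onto the expert's action. All module names, dimensions, and the MSE loss here are illustrative assumptions; shrinking the message width is one simple way to model tightening the communication constraint mentioned above.

```python
# Hypothetical sketch: per-agent message encoder + permutation-invariant
# aggregation + action decoder, trained by imitating a centralized expert.
import torch
import torch.nn as nn

OBS_DIM, MSG_DIM, ACT_DIM = 8, 4, 2  # illustrative sizes, not from the paper

class CommPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: maps a neighbor's local observation to a short message.
        self.encoder = nn.Sequential(nn.Linear(OBS_DIM, 32), nn.ReLU(),
                                     nn.Linear(32, MSG_DIM))
        # Decoder: maps own observation + aggregated messages to an action.
        self.decoder = nn.Sequential(nn.Linear(OBS_DIM + MSG_DIM, 32), nn.ReLU(),
                                     nn.Linear(32, ACT_DIM))

    def forward(self, own_obs, neighbor_obs):
        # neighbor_obs: (num_neighbors, OBS_DIM). Mean pooling keeps the
        # policy well defined for a dynamically changing number of agents.
        msgs = self.encoder(neighbor_obs)          # (num_neighbors, MSG_DIM)
        pooled = msgs.mean(dim=0)                  # (MSG_DIM,)
        return self.decoder(torch.cat([own_obs, pooled], dim=-1))

def imitation_step(policy, optimizer, own_obs, neighbor_obs, expert_action):
    # Supervised regression onto the action chosen by a communication-unaware
    # expert that had access to richer (e.g., full-state) information.
    pred = policy(own_obs, neighbor_obs)
    loss = nn.functional.mse_loss(pred, expert_action)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```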

URL

http://arxiv.org/abs/1901.08490

PDF

http://arxiv.org/pdf/1901.08490
