
Arena: A General Evaluation Platform and Building Toolkit for Multi-Agent Intelligence

2019-05-17
Yuhang Song, Jianyi Wang, Thomas Lukasiewicz, Zhenghua Xu, Mai Xu, Zihan Ding, Lianlong Wu

Abstract

Learning agents that are not only capable of taking tests but also of innovating is becoming the next hot topic in AI. One of the most promising paths towards this vision is multi-agent learning, where agents act as the environment for each other, and improving each agent means proposing new problems for the others. However, existing evaluation platforms are either not compatible with multi-agent settings or limited to a specific game. That is, there is not yet a general evaluation platform for research on multi-agent intelligence. To this end, we introduce Arena, a general evaluation platform for multi-agent intelligence with 35 games of diverse logics and representations. Furthermore, multi-agent intelligence is still at the stage where many problems remain unexplored. Thus, we provide a building toolkit for researchers to invent and build novel multi-agent problems from the provided game set with little effort. Finally, we provide Python implementations of five state-of-the-art deep multi-agent reinforcement learning baselines. Along with the baseline implementations, we release a set of 100 best agents/teams that we can train with different training schemes for each game, as a basis for evaluating agents with population performance, so that the research community can perform comparisons under a stable and uniform standard.
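As a concrete illustration of "evaluating agents with population performance", here is a minimal, self-contained Python sketch: a candidate policy is scored by its average win rate against every member of a fixed population of released opponents. This is not Arena's actual API; rock-paper-scissors stands in for one of its games, and names like `play_episode` and `population_performance` are hypothetical.

```python
import random
from typing import Callable, List

# Hypothetical stand-in for a two-player game: rock-paper-scissors.
# Arena's real games are richer; this toy only illustrates the evaluation idea.
ACTIONS = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

Policy = Callable[[], str]  # a stateless policy that samples an action

def play_episode(a: Policy, b: Policy) -> int:
    """Return +1 if policy `a` wins, -1 if it loses, 0 on a draw."""
    move_a, move_b = a(), b()
    if move_a == move_b:
        return 0
    return 1 if BEATS[move_a] == move_b else -1

def population_performance(candidate: Policy,
                           population: List[Policy],
                           episodes_per_opponent: int = 1000) -> float:
    """Average win rate of `candidate` over every opponent in the population;
    a fixed, shared population makes such scores comparable across papers."""
    wins, games = 0, 0
    for opponent in population:
        for _ in range(episodes_per_opponent):
            wins += play_episode(candidate, opponent) == 1
            games += 1
    return wins / games

if __name__ == "__main__":
    population = [
        lambda: "rock",                  # always plays rock
        lambda: "paper",                 # always plays paper
        lambda: random.choice(ACTIONS),  # uniform random
    ]
    candidate = lambda: random.choice(ACTIONS)
    print(f"population win rate: {population_performance(candidate, population):.3f}")
```

Because the opponent population is released alongside the baselines, any two candidate agents evaluated this way face the same opposition, which is what makes the resulting scores a stable and uniform standard.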

URL

http://arxiv.org/abs/1905.08085

PDF

http://arxiv.org/pdf/1905.08085

