
Modeling the Formation of Social Conventions in Multi-Agent Populations

2019-05-28
Ismael T. Freire, Clement Moulin-Frier, Marti Sanchez-Fibla, Xerxes D. Arsiwalla, Paul Verschure

Abstract

What is the role of real-time control and learning in the formation of social conventions? To answer this question, we propose a computational model that matches human behavioral data in a social decision-making game analyzed in both discrete-time and continuous-time setups. Furthermore, unlike previous approaches, our model takes into account the role of sensorimotor control loops in embodied decision-making scenarios. For this purpose, we introduce the Control-based Reinforcement Learning (CRL) model. CRL is grounded in the Distributed Adaptive Control (DAC) theory of mind and brain, where low-level sensorimotor control is modulated through perceptual and behavioral learning in a layered structure. CRL follows these principles by implementing a feedback control loop handling the agent’s reactive behaviors (pre-wired reflexes), along with an adaptive layer that uses reinforcement learning to maximize long-term reward. We test our model in a multi-agent game-theoretic task in which coordination must be achieved to find an optimal solution. We show that CRL is able to reach human-level performance on standard game-theoretic metrics such as efficiency in acquiring rewards and fairness in reward distribution.
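
The abstract describes a two-layer architecture: a pre-wired reactive control loop plus an adaptive layer trained with reinforcement learning that modulates it. The sketch below illustrates that layering only in broad strokes; the class names, the reflex rule, and the choice of tabular Q-learning for the adaptive layer are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the CRL layering described in the abstract:
# a reactive (feedback-control) layer with pre-wired reflexes, and an adaptive
# layer that learns from reward and can override the reflexive action.
# All names and the Q-learning update are illustrative assumptions.

import random
from collections import defaultdict


class ReactiveLayer:
    """Pre-wired reflexes: map raw sensor readings directly to motor commands."""

    def act(self, observation):
        # Hypothetical reflex: steer toward the stronger sensed stimulus.
        left, right = observation
        if left > right:
            return "turn_left"
        if right > left:
            return "turn_right"
        return "go_straight"


class AdaptiveLayer:
    """Tabular Q-learning layer that learns to maximize long-term reward."""

    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.q = defaultdict(float)
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard Q-learning temporal-difference update.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])


class CRLAgent:
    """Layered agent: the adaptive layer modulates the reactive layer's behavior."""

    def __init__(self, actions):
        self.reactive = ReactiveLayer()
        self.adaptive = AdaptiveLayer(actions)

    def act(self, observation, state, use_adaptive=True):
        # The learned policy takes precedence when enabled; otherwise the agent
        # falls back on its pre-wired reflexes.
        if use_adaptive:
            return self.adaptive.act(state)
        return self.reactive.act(observation)
```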

URL

http://arxiv.org/abs/1802.06108

PDF

http://arxiv.org/pdf/1802.06108

