
On the Utility of Model Learning in HRI

2019-01-04
Rohan Choudhury*, Gokul Swamy*, Dylan Hadfield-Menell, Anca Dragan

Abstract

Fundamental to robotics is the debate between model-based and model-free learning: should the robot build an explicit model of the world, or learn a policy directly? In the context of HRI, part of the world to be modeled is the human. One option is for the robot to treat the human as a black box and learn a policy for how they act directly. But it can also model the human as an agent, and rely on a “theory of mind” to guide or bias the learning (grey box). We contribute a characterization of the performance of these methods under the optimistic case of having an ideal theory of mind, as well as under different scenarios in which the assumptions behind the robot’s theory of mind for the human are wrong, as they inevitably will be in practice. We find that there is a significant sample complexity advantage to theory of mind methods and that they are more robust to covariate shift, but that when enough interaction data is available, black box approaches eventually dominate.
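To make the contrast concrete, below is a minimal, self-contained sketch (not taken from the paper) of the two predictor styles the abstract compares: a black-box learner that fits the human's observed action choices directly, and a grey-box learner that assumes a "theory of mind", here a Boltzmann-rational human with a linear reward and known temperature, and fits only the reward weights. The toy data-generating process and all names (`BETA`, `fit_grey_box`, etc.) are illustrative assumptions, not the authors' experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

K, D = 4, 3                          # number of human actions, state feature dim
BETA = 5.0                           # assumed Boltzmann rationality temperature
true_w = rng.normal(size=(D, K))     # hidden per-action reward weights of the "human"

def human_policy(state):
    """Simulated human: noisily rational with respect to a hidden linear reward."""
    logits = BETA * state @ true_w
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return rng.choice(K, p=p)

# Interaction data the robot gets to observe.
states = rng.normal(size=(50, D))
actions = np.array([human_policy(s) for s in states])

def black_box_predict(s, states, actions):
    """Black box: predict the action taken in the most similar observed state,
    with no assumptions about why the human acted that way."""
    return actions[np.argmin(np.linalg.norm(states - s, axis=1))]

def fit_grey_box(X, y, n_actions, lr=0.1, steps=2000):
    """Grey box ("theory of mind"): assume the human is Boltzmann-rational with
    known temperature BETA and a linear reward; fit only the reward weights by
    maximum likelihood (plain gradient descent on the negative log-likelihood)."""
    W = np.zeros((X.shape[1], n_actions))
    Y = np.eye(n_actions)[y]
    for _ in range(steps):
        logits = BETA * X @ W
        P = np.exp(logits - logits.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)
        W -= lr * BETA * X.T @ (P - Y) / len(X)
    return W

def evaluate(predict, test_states):
    """Fraction of held-out states where the prediction matches the human's
    most likely action under the true reward."""
    truth = np.argmax(test_states @ true_w, axis=1)
    preds = np.array([predict(s) for s in test_states])
    return float((preds == truth).mean())

test_states = rng.normal(size=(500, D))
W_grey = fit_grey_box(states, actions, K)
print("black box (nearest neighbour):",
      evaluate(lambda s: black_box_predict(s, states, actions), test_states))
print("grey box  (Boltzmann MLE):    ",
      evaluate(lambda s: int(np.argmax(s @ W_grey)), test_states))
```

In this sketch the grey-box learner searches a much smaller hypothesis space (just the reward weights), which is the source of the sample-complexity and robustness advantages the abstract reports; it pays for that structure when the rationality assumption is wrong, which is the misspecified regime where the abstract finds black-box approaches eventually dominate.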

URL

http://arxiv.org/abs/1901.01291

PDF

http://arxiv.org/pdf/1901.01291

