papers AI Learner

Unsupervised Visuomotor Control through Distributional Planning Networks

2019-02-14
Tianhe Yu, Gleb Shevchuk, Dorsa Sadigh, Chelsea Finn

Abstract

While reinforcement learning (RL) has the potential to enable robots to autonomously acquire a wide range of skills, in practice, RL usually requires manual, per-task engineering of reward functions, especially in real-world settings where aspects of the environment needed to compute progress are not directly accessible. To enable robots to autonomously learn skills, we instead consider the problem of reinforcement learning without access to rewards. We aim to learn an unsupervised embedding space under which the robot can measure progress towards a goal for itself. Our approach explicitly optimizes for a metric space under which action sequences that reach a particular state are optimal when the goal is the final state reached. This enables learning effective and control-centric representations that lead to more autonomous reinforcement learning algorithms. Our experiments on three simulated environments and two real-world manipulation problems show that our method can learn effective goal metrics from unlabeled interaction, and use the learned goal metrics for autonomous reinforcement learning.
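To make the core idea concrete, here is a minimal toy sketch (not the paper's actual Distributional Planning Networks method) of learning a goal metric from unlabeled trajectories: a linear embedding phi(s) = W s is trained with a hinge loss so that the embedded distance to a trajectory's final state shrinks monotonically along the trajectory. The toy environment, the linear embedding, the hinge objective, and all parameter values below are illustrative assumptions; the state's last two dimensions are uncontrollable distractors the metric should learn to down-weight.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4-D states where only the first two dims are controllable
# (a point mass moving toward a goal); the last two are random
# distractors. All of this is an illustrative assumption, not the
# paper's setup.
def sample_trajectory(T=10):
    pos = rng.uniform(-1, 1, size=2)
    goal = rng.uniform(-1, 1, size=2)
    states = []
    for _ in range(T):
        pos = pos + 0.2 * (goal - pos)       # move toward the goal
        distractor = rng.normal(size=2)      # uncontrollable noise
        states.append(np.concatenate([pos, distractor]))
    return np.array(states)

# Linear embedding phi(s) = W s; squared metric d(s, g) = ||W (s - g)||^2.
W = rng.normal(scale=0.5, size=(2, 4))
lr, margin = 0.02, 1e-3

for _ in range(1500):
    traj = sample_trajectory()
    final = traj[-1]
    grad = np.zeros_like(W)
    for t in range(len(traj) - 2):
        x_t, x_t1 = traj[t] - final, traj[t + 1] - final
        d_t = np.sum((W @ x_t) ** 2)
        d_t1 = np.sum((W @ x_t1) ** 2)
        # Hinge: distance to the final state should decrease along the
        # trajectory, i.e. d_{t+1} + margin < d_t.
        if d_t1 + margin > d_t:
            grad += 2 * (W @ np.outer(x_t1, x_t1) - W @ np.outer(x_t, x_t))
    W -= lr * grad
    W *= 2.0 / np.linalg.norm(W)  # keep the embedding scale fixed

# Columns 0-1 (controllable dims) should carry more weight than
# columns 2-3 (distractors).
col_norms = np.linalg.norm(W, axis=0)
print(col_norms)
```

Once trained, `-||W (s - g)||^2` can serve as a self-supervised reward for a standard RL algorithm, which is the sense in which a learned goal metric replaces a hand-engineered reward.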

URL

http://arxiv.org/abs/1902.05542

PDF

http://arxiv.org/pdf/1902.05542

