
Hierarchical Policy Learning is Sensitive to Goal Space Design

2019-05-04
Zach Dwiel, Madhavun Candadai, Mariano J. Phielipp, Arjun K. Bansal

Abstract

Hierarchy in reinforcement learning agents allows for control at multiple time scales, yielding improved sample efficiency, the ability to deal with long time horizons, and transferability of sub-policies to tasks outside the training distribution. It is often implemented as a master policy providing goals to a sub-policy. Ideally, we would like the goal spaces to be learned; however, the properties of optimal goal spaces remain unknown, and consequently there is no method yet to learn them. Motivated by this, we systematically analyze how various modifications to the ground-truth goal space affect learning in hierarchical models, with the aim of identifying important properties of optimal goal spaces. Our results show that, while rotation of the ground-truth goal space and added noise had no effect, additional unnecessary factors significantly impaired learning in hierarchical models.
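To make the abstract's goal-space manipulations concrete, here is a minimal Python sketch, not taken from the paper, of the three kinds of modification it names: rotation, additive noise, and extra unnecessary dimensions. It assumes the ground-truth goal is a low-dimensional vector such as an agent's (x, y) position; the function and parameter names (`rotate_goal`, `add_noise`, `append_unnecessary_factors`, `angle`, `sigma`, `n_extra`) are illustrative, not the authors'.

```python
import numpy as np

def rotate_goal(goal, angle):
    """Rotate a 2-D ground-truth goal by a fixed angle (radians)."""
    rotation = np.array([[np.cos(angle), -np.sin(angle)],
                         [np.sin(angle),  np.cos(angle)]])
    return rotation @ goal

def add_noise(goal, sigma, rng):
    """Add zero-mean Gaussian noise to each goal dimension."""
    return goal + rng.normal(0.0, sigma, size=goal.shape)

def append_unnecessary_factors(goal, n_extra, rng):
    """Append task-irrelevant dimensions to the goal representation."""
    extra = rng.uniform(-1.0, 1.0, size=n_extra)
    return np.concatenate([goal, extra])

# Example: a ground-truth goal (x, y) that a master policy would pass to a sub-policy.
rng = np.random.default_rng(0)
goal = np.array([1.0, 2.0])
print(rotate_goal(goal, np.pi / 4))              # rotated goal space: reported as having no effect
print(add_noise(goal, sigma=0.1, rng=rng))        # noisy goal space: reported as having no effect
print(append_unnecessary_factors(goal, 3, rng))   # extra factors: reported to impair learning
```

Each transformed goal would replace the ground-truth goal handed to the sub-policy, which is how the abstract frames the comparison between goal-space designs.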

URL

http://arxiv.org/abs/1905.01537

PDF

http://arxiv.org/pdf/1905.01537

