
Model Primitive Hierarchical Lifelong Reinforcement Learning

2019-03-04
Bohan Wu, Jayesh K. Gupta, Mykel J. Kochenderfer

Abstract

Learning interpretable and transferable subpolicies and performing task decomposition from a single, complex task is difficult. Some traditional hierarchical reinforcement learning techniques enforce this decomposition in a top-down manner, while meta-learning techniques require a task distribution at hand to learn such decompositions. This paper presents a framework for using diverse suboptimal world models to decompose complex task solutions into simpler modular subpolicies. This framework performs automatic decomposition of a single source task in a bottom-up manner, concurrently learning the required modular subpolicies as well as a controller to coordinate them. We perform a series of experiments on high-dimensional continuous-action control tasks to demonstrate the effectiveness of this approach at both complex single-task learning and lifelong learning. Finally, we perform ablation studies to understand the importance and robustness of different elements in the framework, as well as the limitations of this approach.
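
To make the gating idea concrete, here is a minimal sketch of one common mixture-of-experts formulation: each suboptimal world model scores how well it explains an observed transition, and a softmax over those scores gives the controller soft responsibilities over the corresponding subpolicies. All names here (`gate_weights`, `controller_action`, the Gaussian error model, the toy linear models) are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_log_likelihood(pred, actual, sigma=1.0):
    """Log-likelihood of an observed next state under a Gaussian centered
    on a world model's prediction (a modeling assumption of this sketch)."""
    return -np.sum((pred - actual) ** 2) / (2.0 * sigma ** 2)

def gate_weights(world_models, state, action, next_state):
    """Soft responsibilities over subpolicies: each suboptimal world model
    scores the observed transition, and a softmax turns the scores into
    mixture weights for the corresponding subpolicies."""
    scores = np.array([gaussian_log_likelihood(m(state, action), next_state)
                       for m in world_models])
    scores -= scores.max()                  # numerical stability
    w = np.exp(scores)
    return w / w.sum()

def controller_action(subpolicies, weights, state):
    """The controller blends the subpolicies' actions by the gate weights."""
    actions = np.stack([pi(state) for pi in subpolicies])
    return weights @ actions

# Toy usage: two crude linear world models and two linear subpolicies.
A1, A2 = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
world_models = [lambda s, a, A=A: A @ s + a for A in (A1, A2)]
subpolicies = [lambda s, W=rng.normal(size=(3, 3)): W @ s for _ in range(2)]

s, a = rng.normal(size=3), rng.normal(size=3)
s_next = world_models[0](s, a) + 0.1 * rng.normal(size=3)  # model 1 fits best
w = gate_weights(world_models, s, a, s_next)
print("gate weights:", w)                   # should favor the first model
print("blended action:", controller_action(subpolicies, w, s))
```

A softmax over model scores is one standard way to turn competing predictors into a soft decomposition; the paper's actual gating controller and subpolicy training procedure are more involved, and the world models are deliberately allowed to remain suboptimal.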

URL

https://arxiv.org/abs/1903.01567

PDF

https://arxiv.org/pdf/1903.01567

