papers AI Learner

Exploring Deep and Recurrent Architectures for Optimal Control

2013-11-07
Sergey Levine

Abstract

Sophisticated multilayer neural networks have achieved state-of-the-art results on multiple supervised tasks. However, successful applications of such multilayer networks to control have so far been limited largely to the perception portion of the control pipeline. In this paper, we explore the application of deep and recurrent neural networks to a continuous, high-dimensional locomotion task, where the network is used to represent a control policy that maps the state of the system (represented by joint angles) directly to the torques at each joint. By using a recent reinforcement learning algorithm called guided policy search, we can successfully train neural network controllers with thousands of parameters, allowing us to compare a variety of architectures. We discuss the differences between the locomotion control task and previous supervised perception tasks, present experimental results comparing various architectures, and discuss future directions in the application of techniques from deep learning to the problem of optimal control.
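
The paper does not include code; the snippet below is a minimal sketch of the policy representation the abstract describes, namely a feedforward network mapping the joint-angle state directly to per-joint torques. The use of PyTorch, the two-layer depth, the tanh activations, and the example dimensions (18 state variables, 6 actuated joints) are illustrative assumptions rather than the architectures compared in the paper, and the guided policy search training procedure is not shown.

```python
# Minimal sketch (not from the paper): a neural network control policy that
# maps the system state to joint torques, as described in the abstract.
import torch
import torch.nn as nn


class TorquePolicy(nn.Module):
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        # Two hidden layers with tanh nonlinearities; the paper explores deeper
        # and recurrent variants, which would replace this simple stack.
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, action_dim),  # one output torque per actuated joint
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


# Hypothetical dimensions for a planar locomotion task: 18 state variables,
# 6 actuated joints. One forward pass maps state -> joint torques.
policy = TorquePolicy(state_dim=18, action_dim=6)
torques = policy(torch.zeros(1, 18))
print(torques.shape)  # torch.Size([1, 6])
```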

URL

https://arxiv.org/abs/1311.1761

PDF

https://arxiv.org/pdf/1311.1761

