papers AI Learner

Policy Optimization with Second-Order Advantage Information

2019-05-29
Jiajin Li, Baoxiang Wang

Abstract

Policy optimization on high-dimensional continuous control tasks is difficult because of the large variance of policy gradient estimators. We present the action subspace dependent gradient (ASDG) estimator, which incorporates the Rao-Blackwell theorem (RB) and control variates (CV) into a unified framework to reduce the variance. To invoke RB, our proposed algorithm (POSA) learns the underlying factorization structure of the action space based on second-order advantage information. POSA captures this quadratic information explicitly and efficiently by utilizing a wide & deep architecture. Empirical studies show that our approach yields performance improvements on high-dimensional synthetic settings and on OpenAI Gym's MuJoCo continuous control tasks.
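The control-variate idea underlying the abstract can be illustrated on a toy score-function gradient estimator. The sketch below is not the paper's ASDG/POSA method; it only shows, under simplified assumptions (a scalar Gaussian policy and a fixed quadratic objective), how subtracting a baseline reduces estimator variance while leaving the gradient estimate essentially unchanged.

```python
import numpy as np

# Toy illustration (not the paper's POSA algorithm): a constant baseline
# acts as a control variate for the score-function gradient estimator.
# We estimate d/dmu E[f(x)] for x ~ N(mu, 1) with f(x) = (x - 3)^2.
# The true gradient is 2*(mu - 3) = -6 at mu = 0.
rng = np.random.default_rng(0)
mu = 0.0
n = 100_000
x = rng.normal(mu, 1.0, size=n)

f = (x - 3.0) ** 2
score = x - mu                  # d/dmu log N(x; mu, 1)

plain = f * score               # vanilla score-function samples
baseline = f.mean()             # constant baseline (slightly biased here,
                                # since it is fit on the same samples)
cv = (f - baseline) * score     # baseline-subtracted samples

print(plain.mean(), cv.mean())  # both close to the true gradient, -6
print(plain.var(), cv.var())    # cv variance is substantially smaller
```

Both estimators target the same gradient because the baseline term has expectation zero under the score function; the variance drop is what makes baselines (and, more generally, learned control variates) standard practice in policy gradient methods.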

URL

http://arxiv.org/abs/1805.03586

PDF

http://arxiv.org/pdf/1805.03586

