pith. machine review for the scientific record.

arxiv: 1805.11706 · v4 · submitted 2018-05-29 · 💻 cs.LG · cs.AI · cs.RO · cs.SY · stat.ML


Supervised Policy Update for Deep Reinforcement Learning

keywords: policy optimization · methodology · non-parameterized · problem · supervised · trpo · deep
abstract

We propose a new sample-efficient methodology, called Supervised Policy Update (SPU), for deep reinforcement learning. Starting with data generated by the current policy, SPU formulates and solves a constrained optimization problem in the non-parameterized proximal policy space. Using supervised regression, it then converts the optimal non-parameterized policy to a parameterized policy, from which it draws new samples. The methodology is general in that it applies to both discrete and continuous action spaces, and can handle a wide variety of proximity constraints for the non-parameterized optimization problem. We show how the Natural Policy Gradient and Trust Region Policy Optimization (NPG/TRPO) problems, and the Proximal Policy Optimization (PPO) problem can be addressed by this methodology. The SPU implementation is much simpler than TRPO. In terms of sample efficiency, our extensive experiments show SPU outperforms TRPO in Mujoco simulated robotic tasks and outperforms PPO in Atari video game tasks.
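The abstract's two-step loop can be sketched for a tabular softmax policy with discrete actions. This is an illustration, not the paper's exact formulation: the exponential-tilting target below is the standard closed-form solution of an advantage-maximization problem under a KL proximity constraint (`temperature` playing the role of the Lagrange multiplier), and the advantage estimates are assumed given.

```python
import numpy as np

def spu_step(logits, advantages, temperature=1.0, lr=1.0, n_regress=300):
    """One SPU-style update for a tabular softmax policy (illustrative sketch).

    logits      -- (S, A) current policy parameters, one row per state
    advantages  -- (S, A) advantage estimates from current-policy samples
    temperature -- hypothetical Lagrange multiplier of the KL proximity constraint
    """
    # current parameterized policy pi(a|s) = softmax(logits)
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)

    # step 1: solve the constrained problem in the non-parameterized policy space.
    # For a KL constraint over discrete actions this has the closed-form
    # exponential-tilting solution pi*(a|s) ∝ pi(a|s) exp(A(s,a)/temperature).
    target = probs * np.exp(advantages / temperature)
    target /= target.sum(axis=1, keepdims=True)

    # step 2: supervised regression of the parameterized policy onto the target,
    # here gradient descent on the cross-entropy H(target, pi_theta); for a
    # softmax the gradient w.r.t. the logits is simply (pi_theta - target).
    for _ in range(n_regress):
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        logits = logits - lr * (p - target)

    return logits, target
```

In the actual method the regression targets come from fresh rollouts and the policy is a neural network, so step 2 is a stochastic-gradient fit rather than the exact tabular descent above; the tabular case just makes the two-stage structure explicit.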

This paper has not been read by Pith yet.

discussion (0)
