pith. machine review for the scientific record.

arxiv: 1812.00950 · v1 · submitted 2018-12-03 · cs.LG · cs.AI · stat.ML

Recognition: unknown

Generative Adversarial Self-Imitation Learning

Authors on Pith: no claims yet
classification: cs.LG · cs.AI · stat.ML
keywords: gasil · learning · adversarial · generative · delayed · good · past · policy
read the original abstract

This paper explores a simple regularizer for reinforcement learning by proposing Generative Adversarial Self-Imitation Learning (GASIL), which encourages the agent to imitate its own past good trajectories via the generative adversarial imitation learning framework. Instead of directly maximizing rewards, GASIL focuses on reproducing past good trajectories, which can potentially make long-term credit assignment easier when rewards are sparse and delayed. GASIL can be easily combined with any policy gradient objective by using it as a learned shaped reward function. Our experimental results show that GASIL improves the performance of proximal policy optimization on 2D Point Mass and MuJoCo environments with delayed rewards and stochastic dynamics.
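The abstract describes two moving parts: a buffer of the agent's best past trajectories, and a GAIL-style discriminator whose output is added to the environment reward as a learned shaped reward. The sketch below illustrates those two pieces in plain Python; all names (`GoodTrajectoryBuffer`, `shaped_reward`, the `alpha` weight) are hypothetical and not taken from the authors' code, and the discriminator itself is abstracted to a probability `d_prob`.

```python
import heapq
import math
import random

class GoodTrajectoryBuffer:
    """Keep the top-K past trajectories ranked by episode return.

    A min-heap of size K: a new trajectory replaces the current worst
    entry only if its return is higher (a plausible reading of the
    "past good trajectories" buffer in the abstract).
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self._heap = []    # entries: (episode_return, tie_breaker, trajectory)
        self._count = 0    # tie-breaker so trajectories are never compared

    def add(self, trajectory, episode_return):
        item = (episode_return, self._count, trajectory)
        self._count += 1
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, item)
        elif episode_return > self._heap[0][0]:
            heapq.heapreplace(self._heap, item)  # evict the worst kept trajectory

    def sample(self):
        """Sample one stored trajectory uniformly (discriminator's "expert" data)."""
        return random.choice(self._heap)[2]

def shaped_reward(env_reward, d_prob, alpha=0.1, eps=1e-8):
    """Combine the environment reward with a GAIL-style learned reward.

    `d_prob` stands in for the discriminator's probability that a
    transition came from the good-trajectory buffer; -log(1 - D) is one
    common GAIL reward form, and `alpha` weights it against `env_reward`.
    The combined value can then be fed to any policy gradient method
    (e.g. PPO, as in the paper's experiments).
    """
    return env_reward + alpha * -math.log(1.0 - d_prob + eps)
```

The buffer/discriminator split is what makes the method a *self*-imitation approach: the "expert" data is not external demonstrations but the agent's own highest-return episodes, refreshed as training improves.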

This paper has not been read by Pith yet.

discussion (0)
