Pith · machine review for the scientific record

arXiv: 1806.05635 · v1 · submitted 2018-06-14 · 💻 cs.LG · cs.AI · stat.ML

Recognition: unknown

Self-Imitation Learning

Authors on Pith: no claims yet
classification: 💻 cs.LG · cs.AI · stat.ML
keywords: exploration · actor-critic · algorithm · good · improves · learning · past · self-imitation
read the original abstract

This paper proposes Self-Imitation Learning (SIL), a simple off-policy actor-critic algorithm that learns to reproduce the agent's past good decisions. The algorithm is designed to verify our hypothesis that exploiting past good experiences can indirectly drive deep exploration. Our empirical results show that SIL significantly improves advantage actor-critic (A2C) on several hard-exploration Atari games and is competitive with state-of-the-art count-based exploration methods. We also show that SIL improves proximal policy optimization (PPO) on MuJoCo tasks.
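The core idea in the abstract — imitate a past action only when its observed return beat the current value estimate — can be sketched with the paper's clipped-advantage loss. This is an illustrative single-transition sketch, not the authors' implementation: the function name, the scalar inputs, and the `beta` value-loss weight are assumptions for demonstration (the real algorithm samples transitions from a prioritized replay buffer and operates on batches).

```python
def sil_loss(log_prob, value, ret, beta=0.01):
    """Illustrative self-imitation loss for a single stored transition.

    log_prob -- log pi(a|s) of the stored action under the current policy
    value    -- current value estimate V(s)
    ret      -- discounted return R observed from this state in the past
    beta     -- weight on the value-regression term (assumed hyperparameter)
    """
    # Clipped advantage (R - V(s))_+ : imitation is active only when the
    # past return exceeded the current value estimate; otherwise both
    # terms (and their gradients) are zero.
    advantage = max(ret - value, 0.0)
    policy_loss = -log_prob * advantage      # reinforce the past good action
    value_loss = 0.5 * advantage ** 2        # pull V(s) up toward the good return
    return policy_loss + beta * value_loss
```

A transition whose return fell below the current value estimate contributes nothing, which is how SIL avoids imitating mediocre past behavior.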

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.