Overcoming Exploration in Reinforcement Learning with Demonstrations
by Ashvin Nair, Bob McGrew, Marcin Andrychowicz, Wojciech Zaremba, Pieter Abbeel
2017
Abstract
Exploration in environments with sparse rewards has been a persistent problem
in reinforcement learning (RL). Many tasks are natural to specify with a sparse
reward, and manually shaping a reward function can result in suboptimal
performance. However, finding a non-zero reward is exponentially more difficult
with increasing task horizon or action dimensionality. This puts many
real-world tasks out of practical reach of RL methods. In this work, we use
demonstrations to overcome the exploration problem and successfully learn to
perform long-horizon, multi-step robotics tasks with continuous control such as
stacking blocks with a robot arm. Our method, which builds on top of Deep
Deterministic Policy Gradients and Hindsight Experience Replay, provides an
order-of-magnitude speedup over RL on simulated robotics tasks. It is simple
to implement and makes only the additional assumption that we can collect a
small set of demonstrations. Furthermore, our method is able to solve tasks not
solvable by either RL or behavior cloning alone, and often ends up
outperforming the demonstrator policy.
Archived Files and Locations
application/pdf 1.4 MB
arxiv.org (repository) | web.archive.org (webarchive)
arXiv: 1709.10089v1