Batch Reinforcement Learning with a Nonparametric Off-Policy Policy Gradient
by Samuele Tosatto, João Carvalho, Jan Peters (2020)
Abstract
Off-policy Reinforcement Learning (RL) holds the promise of better data
efficiency, as it allows sample reuse and potentially enables safe interaction
with the environment. Current off-policy policy gradient methods suffer from
either high bias or high variance, often delivering unreliable estimates. The
price of this inefficiency becomes evident in real-world scenarios such as
interaction-driven robot learning, where the success of RL has been rather
limited and a very high sample cost hinders straightforward application. In
this paper, we propose a nonparametric Bellman equation that can be solved in
closed form. The solution is differentiable w.r.t. the policy parameters and
provides an estimate of the policy gradient. In this way, we avoid the high
variance of importance-sampling approaches and the high bias of semi-gradient
methods. We empirically analyze the quality of our gradient estimate against
state-of-the-art methods and show that it outperforms the baselines in terms
of sample efficiency on classical control tasks.
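
The abstract's core computation lends itself to a compact illustration. The
following is a minimal JAX sketch of the idea described above, not the
authors' released code: a kernel-smoothed transition matrix that depends on
the policy parameters defines a finite-dimensional Bellman equation over the
batch, its closed-form solution is a single linear solve, and automatic
differentiation through that solve gives an off-policy policy gradient. The
Gaussian kernels, the linear deterministic policy, and all names here (rbf,
policy_mean, nopg_style_return) are illustrative assumptions.

# Sketch only: kernel-based Bellman equation solved in closed form,
# differentiated with autodiff. Not the paper's exact estimator.
import jax
import jax.numpy as jnp

def rbf(x, y, bw):
    """Gaussian kernel between two batches of vectors."""
    d2 = jnp.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return jnp.exp(-0.5 * d2 / bw ** 2)

def policy_mean(theta, s):
    """Toy deterministic policy: a linear map of the state (assumption)."""
    return s @ theta

def nopg_style_return(theta, S, A, R, S_next, mu0, gamma=0.99,
                      bw_s=0.5, bw_a=0.5):
    # Responsibility of each dataset sample (s_j, a_j) for the state-action
    # pair the current policy would visit from the next state s'_i.
    A_pi = policy_mean(theta, S_next)              # actions at next states
    K = rbf(S_next, S, bw_s) * rbf(A_pi, A, bw_a)  # (n, n) kernel matrix
    P = K / jnp.sum(K, axis=1, keepdims=True)      # row-stochastic transitions

    # Closed-form solution of the finite Bellman equation
    # v = r + gamma * P v  =>  v = (I - gamma * P)^{-1} r.
    n = R.shape[0]
    v = jnp.linalg.solve(jnp.eye(n) - gamma * P, R)

    # Expected return under a kernel-smoothed initial state distribution.
    w0 = rbf(mu0, S, bw_s)
    w0 = w0 / jnp.sum(w0, axis=1, keepdims=True)
    return jnp.mean(w0 @ v)

# The policy gradient is obtained by differentiating through the solve:
grad_fn = jax.grad(nopg_style_return)

A gradient ascent step would then be theta + alpha * grad_fn(theta, S, A, R,
S_next, mu0). Because the whole estimate reduces to one linear solve over the
batch, no importance weights or bootstrapped semi-gradient targets appear,
which is the bias/variance trade-off the abstract refers to.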
Archived Files and Locations
application/pdf, 4.2 MB (arXiv:2010.14771v2); available from arxiv.org (repository) and web.archive.org (webarchive)