Batch Reinforcement Learning with a Nonparametric Off-Policy Policy Gradient

by Samuele Tosatto, João Carvalho, Jan Peters

Released as an article.

2021  

Abstract

Off-policy Reinforcement Learning (RL) holds the promise of better data efficiency, as it allows sample reuse and potentially enables safe interaction with the environment. Current off-policy policy gradient methods suffer from either high bias or high variance, often delivering unreliable estimates. The price of this inefficiency becomes evident in real-world scenarios such as interaction-driven robot learning, where the success of RL has been rather limited and the very high sample cost hinders straightforward application. In this paper, we propose a nonparametric Bellman equation that can be solved in closed form. The solution is differentiable w.r.t. the policy parameters and gives access to an estimate of the policy gradient. In this way, we avoid the high variance of importance-sampling approaches and the high bias of semi-gradient methods. We empirically analyze the quality of our gradient estimate against state-of-the-art methods, and show that it outperforms the baselines in terms of sample efficiency on classical control tasks.
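
To make the closed-form construction concrete, the sketch below gives one plausible reading of the abstract, not the paper's actual implementation: transitions under the current policy are weighted with a kernel over state-action pairs, the resulting linear Bellman system is solved exactly, and the whole return estimate is differentiated w.r.t. the policy parameters with automatic differentiation. All names (rbf, policy, policy_return, the linear policy, the Gaussian kernel and its bandwidth) are illustrative assumptions.

```python
import jax
import jax.numpy as jnp


def rbf(x, y, bandwidth):
    # Gaussian (RBF) kernel between two state-action vectors (illustrative choice).
    d = x - y
    return jnp.exp(-0.5 * jnp.dot(d, d) / bandwidth ** 2)


def policy(theta, s):
    # Toy deterministic linear policy a = theta @ s; a stand-in for any
    # differentiable parametric policy.
    return theta @ s


def policy_return(theta, S, A, R, S_next, S0, gamma=0.99, bandwidth=0.5):
    # Support points: the observed state-action pairs of the batch.
    SA = jnp.concatenate([S, A], axis=1)

    # Successor state-action pairs under the *current* policy.
    A_next = jax.vmap(policy, in_axes=(None, 0))(theta, S_next)
    SA_next = jnp.concatenate([S_next, A_next], axis=1)

    # Kernel matrix between successor pairs and support points,
    # row-normalized into a stochastic, transition-like matrix.
    gram = jax.vmap(lambda x: jax.vmap(lambda y: rbf(x, y, bandwidth))(SA))
    P = gram(SA_next)
    P = P / (P.sum(axis=1, keepdims=True) + 1e-8)

    # Closed-form solution of the kernelized Bellman equation q = R + gamma * P @ q.
    n = R.shape[0]
    q = jnp.linalg.solve(jnp.eye(n) - gamma * P, R)

    # Expected return: kernel-weighted value at the initial states S0.
    A0 = jax.vmap(policy, in_axes=(None, 0))(theta, S0)
    W0 = gram(jnp.concatenate([S0, A0], axis=1))
    W0 = W0 / (W0.sum(axis=1, keepdims=True) + 1e-8)
    return jnp.mean(W0 @ q)


# The return estimate is a smooth function of theta, so a policy gradient
# is obtained by differentiating through the closed-form solution.
policy_gradient = jax.grad(policy_return)
```

Because the Q-values come from a single linear solve rather than reweighted sampled returns, no importance weights appear in this construction, which is the mechanism the abstract credits for avoiding the variance of importance-sampling estimators.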

Archived Files and Locations

application/pdf  4.7 MB
file_q3vudhaq4rewpbbbf4los3343m
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2021-06-07
Version   v3
Language   en
arXiv  2010.14771v3
Catalog Record
Revision: 712853d5-b9f7-4db3-be50-fb84027a1b16