Learning personalized treatments via IRL
by
Stav Belogolovsky, Philip Korsunsky, Shie Mannor, Chen Tessler, Tom Zahavy
2020
Abstract
We consider the task of Inverse Reinforcement Learning in Contextual Markov
Decision Processes (MDPs). In this setting, contexts, which define the reward and
transition kernel, are sampled from a distribution. Although the reward is a
function of the context, it is not provided to the agent; instead, the agent observes
demonstrations from an optimal policy. The goal is to learn the reward mapping
so that the agent will act optimally even when encountering previously unseen
contexts, also known as zero-shot transfer. We formulate this problem as a
non-differentiable convex optimization problem and propose a novel algorithm to
compute its subgradients. Based on this scheme, we analyze several methods both
theoretically and empirically, comparing their sample complexity and
scalability. Most importantly, we show, both in theory and in practice, that our
algorithms perform zero-shot transfer (generalize to new and unseen contexts).
Specifically, we present empirical experiments in a dynamic treatment regime,
where the goal is to learn a reward function that explains the behavior of
expert physicians, based on recorded data of their treatment of patients
diagnosed with sepsis.
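The abstract's central technical step is computing subgradients of a non-differentiable convex objective by solving the forward problem under the current reward. The sketch below is illustrative only and not the paper's algorithm: it assumes a tabular MDP with a fixed transition kernel shared across contexts, rewards linear in the context (r_c = W c), expert occupancy measures precomputed per context, and a uniform start distribution. Under those assumptions, the per-context loss max_pi <W c, mu_pi> - <W c, mu_E> is convex in W as a pointwise maximum of linear functions, and outer(mu* - mu_E, c) is a subgradient, where mu* is the occupancy measure of a policy optimal for the current reward.

```python
# Illustrative sketch only; the names, shapes, and the uniform start
# distribution are assumptions for this example, not taken from the paper.
import numpy as np

def occupancy_of_optimal_policy(P, r, gamma, n_iters=200):
    """Discounted state-occupancy measure of a policy greedy for reward r,
    found by value iteration. P: (A, S, S) transitions, r: (S,) reward."""
    A, S, _ = P.shape
    v = np.zeros(S)
    for _ in range(n_iters):
        q = r + gamma * (P @ v)        # (A, S): Q-values per action, state
        v = q.max(axis=0)
    pi = q.argmax(axis=0)              # greedy policy: one action per state
    P_pi = P[pi, np.arange(S)]         # (S, S) Markov chain induced by pi
    d0 = np.full(S, 1.0 / S)           # assumed uniform start distribution
    # Solve mu = d0 + gamma * P_pi^T mu for the discounted occupancy measure.
    return np.linalg.solve(np.eye(S) - gamma * P_pi.T, d0)

def subgradient_step(W, contexts, mu_E, P, gamma, lr=0.1):
    """One projected-subgradient step on the convex loss
    L(W) = sum_c [ max_pi <W c, mu_pi> - <W c, mu_E[c]> ]."""
    g = np.zeros_like(W)
    for c, mu_e in zip(contexts, mu_E):
        r = W @ c                                # reward induced by context c
        mu_star = occupancy_of_optimal_policy(P, r, gamma)
        g += np.outer(mu_star - mu_e, c)         # subgradient contribution
    W = W - lr * g
    return W / max(1.0, np.linalg.norm(W))       # project onto the unit ball
```

A driver loop would repeat subgradient_step over the sampled training contexts until the duality gap is small, then test zero-shot transfer by solving the forward MDP for the reward W @ c on held-out contexts, as the abstract describes.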
Archived Files and Locations
application/pdf 1.1 MB
arxiv.org (repository): 1905.09710v4
web.archive.org (webarchive)