Joint Goal and Strategy Inference across Heterogeneous Demonstrators via
Reward Network Distillation
by
Letian Chen, Rohan Paleja, Muyleng Ghuy, Matthew Gombolay
2020
Abstract
Reinforcement learning (RL) has achieved tremendous success as a general
framework for learning how to make decisions. However, this success relies on
the interactive hand-tuning of a reward function by RL experts. On the other
hand, inverse reinforcement learning (IRL) seeks to learn a reward function
from readily-obtained human demonstrations. Yet, IRL suffers from two major
limitations: 1) reward ambiguity - infinitely many reward functions could
explain an expert's demonstration - and 2) heterogeneity - human experts adopt
varying strategies and preferences, which makes learning from multiple
demonstrators difficult under the common assumption that all demonstrators
seek to maximize the same reward. In this work,
we propose a method to jointly infer a task goal and humans' strategic
preferences via network distillation. This approach enables us to distill a
robust task reward (addressing reward ambiguity) and to model each strategy's
objective (handling heterogeneity). We demonstrate that our algorithm better
recovers the task reward and the strategy rewards, and better imitates the
demonstrated strategies, in two simulated tasks and a real-world table tennis
task.
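The decomposition described in the abstract - a task reward shared by all demonstrators plus a strategy-specific reward per demonstrator - can be sketched as follows. This is a minimal illustration, not the paper's implementation: the linear reward models, the feature dimension, and the number of strategies are all hypothetical placeholders for the paper's reward networks.

```python
import numpy as np

rng = np.random.default_rng(0)

class LinearReward:
    """Hypothetical stand-in for a reward network: a linear map over
    state-action features. The paper's actual models are neural networks."""
    def __init__(self, dim):
        self.w = rng.normal(size=dim) * 0.01
    def __call__(self, features):
        return float(self.w @ features)

DIM = 4           # assumed feature dimension (illustrative)
N_STRATEGIES = 3  # assumed number of demonstrated strategies (illustrative)

# One task reward shared across all demonstrators (addresses reward
# ambiguity by being distilled from everyone's demonstrations) ...
task_reward = LinearReward(DIM)
# ... plus one residual reward per strategy (handles heterogeneity).
strategy_rewards = [LinearReward(DIM) for _ in range(N_STRATEGIES)]

def demonstrator_reward(i, features):
    """Reward explaining demonstrator i: shared task term + strategy term."""
    return task_reward(features) + strategy_rewards[i](features)

features = rng.normal(size=DIM)
print(demonstrator_reward(0, features))
```

Under this factorization, IRL fits `task_reward` jointly across all demonstrations while each `strategy_rewards[i]` absorbs only the preferences specific to strategy `i`.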
Archived Files and Locations
application/pdf, 2.9 MB - arxiv.org (repository), web.archive.org (webarchive)
arXiv:2001.00503v2