Joint Goal and Strategy Inference across Heterogeneous Demonstrators via Reward Network Distillation

by Letian Chen, Rohan Paleja, Muyleng Ghuy, Matthew Gombolay

Released as an article.

2020  

Abstract

Reinforcement learning (RL) has achieved tremendous success as a general framework for learning how to make decisions. However, this success relies on the interactive hand-tuning of a reward function by RL experts. On the other hand, inverse reinforcement learning (IRL) seeks to learn a reward function from readily-obtained human demonstrations. Yet, IRL suffers from two major limitations: 1) reward ambiguity - there are an infinite number of possible reward functions that could explain an expert's demonstration, and 2) heterogeneity - human experts adopt varying strategies and preferences, which makes learning from multiple demonstrators difficult due to the common assumption that demonstrators seek to maximize the same reward. In this work, we propose a method to jointly infer a task goal and humans' strategic preferences via network distillation. This approach enables us to distill a robust task reward (addressing reward ambiguity) and to model each strategy's objective (handling heterogeneity). We demonstrate that our algorithm can better recover the task reward and strategy rewards, and imitate the strategies, in two simulated tasks and a real-world table tennis task.
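The abstract describes decomposing each demonstrator's objective into a shared task reward plus a strategy-specific preference. A minimal numpy sketch of that decomposition follows; all numbers and names are hypothetical, and the paper uses learned neural reward networks with distillation rather than the simple mean taken here.

```python
import numpy as np

# Hypothetical per-demonstrator rewards over 5 states (illustrative values,
# not from the paper): each row is one demonstrator's inferred reward.
per_demo_rewards = np.array([
    [1.0, 0.2, 0.0, 0.9, 0.1],   # demonstrator A
    [0.9, 0.1, 0.3, 1.0, 0.0],   # demonstrator B
    [1.1, 0.0, 0.1, 0.8, 0.4],   # demonstrator C
])

# "Distill" a shared task reward as the component common to all demonstrators
# (here, the per-state mean), and treat each strategy as a residual preference.
task_reward = per_demo_rewards.mean(axis=0)
strategy_rewards = per_demo_rewards - task_reward

# By construction, each demonstrator's reward is task + strategy, and the
# strategy residuals average to zero across demonstrators.
assert np.allclose(task_reward + strategy_rewards, per_demo_rewards)
assert np.allclose(strategy_rewards.mean(axis=0), 0.0)
```

This only illustrates the additive task-plus-strategy structure; handling reward ambiguity and heterogeneity in the paper involves jointly training the reward networks, not a closed-form average.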

Archived Files and Locations

application/pdf  2.9 MB
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   accepted
Date   2020-01-03
Version   v2
Language   en
arXiv  2001.00503v2