Grounding Language for Transfer in Deep Reinforcement Learning
release_atofepyaobgd3jckvgonompkxy
by
Karthik Narasimhan, Regina Barzilay, Tommi Jaakkola
2018, Volume 63, pp. 849-874
Abstract



In this paper, we explore the use of natural language to drive transfer for reinforcement learning (RL). Despite the widespread application of deep RL techniques, learning generalized policy representations that work across domains remains a challenging problem. We demonstrate that textual descriptions of environments provide a compact intermediate channel to facilitate effective policy transfer. Specifically, by learning to ground the meaning of text to the dynamics of the environment such as transitions and rewards, an autonomous agent can effectively bootstrap policy learning on a new domain given its description. We employ a model-based RL approach consisting of a differentiable planning module, a model-free component and a factorized state representation to effectively use entity descriptions. Our model outperforms prior work on both transfer and multi-task scenarios in a variety of different environments. For instance, we achieve up to 14% and 11.5% absolute improvement over previously existing models in terms of average and initial rewards, respectively.
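The abstract outlines a model-based architecture that combines a differentiable planning module, a model-free component, and a factorized state representation conditioned on entity descriptions. The minimal sketch below (PyTorch, not the authors' released implementation) illustrates one way text descriptions of entities could parameterize a reward map and a value-iteration-style planning loop; all layer sizes, names, and the toy grid are illustrative assumptions.

    # Minimal sketch, assuming a grid-world with per-entity text descriptions.
    # Not the paper's implementation; sizes and names are illustrative.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TextConditionedPlanner(nn.Module):
        def __init__(self, vocab_size, embed_dim=32, k_iterations=10):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            # Map a pooled description vector to a per-entity reward and to a
            # 3x3 transition kernel used inside the planning loop (assumed
            # here, in the spirit of a value-iteration-network-style planner).
            self.reward_head = nn.Linear(embed_dim, 1)
            self.kernel_head = nn.Linear(embed_dim, 9)
            self.k = k_iterations

        def forward(self, desc_tokens, entity_map):
            # desc_tokens: (E, seq_len) token ids of each entity description
            # entity_map:  (E, H, W) one-hot occupancy grid per entity
            desc_vec = self.embed(desc_tokens).mean(dim=1)       # (E, D)
            rewards = self.reward_head(desc_vec)                  # (E, 1)
            # Scatter text-derived rewards onto the grid cells each entity occupies.
            reward_map = (entity_map * rewards.view(-1, 1, 1)).sum(0, keepdim=True)  # (1, H, W)
            kernel = self.kernel_head(desc_vec.mean(0)).view(1, 1, 3, 3)

            v = torch.zeros_like(reward_map)
            for _ in range(self.k):                               # differentiable planning iterations
                q = F.conv2d(v.unsqueeze(0), kernel, padding=1).squeeze(0)
                v = torch.max(reward_map + 0.9 * q, v)
            return v                                              # (1, H, W) value map

    # Toy usage: 3 entities on a 5x5 grid, 4-token descriptions.
    planner = TextConditionedPlanner(vocab_size=100)
    tokens = torch.randint(0, 100, (3, 4))
    occupancy = torch.zeros(3, 5, 5)
    occupancy[0, 1, 1] = occupancy[1, 2, 3] = occupancy[2, 4, 0] = 1.0
    values = planner(tokens, occupancy)
    print(values.shape)  # torch.Size([1, 5, 5])

In this sketch the text encoder is a simple mean of token embeddings; the paper's factorized representation and model-free component are omitted, since the point is only to show how grounded descriptions can feed a differentiable planner.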


Archived Files and Locations
application/pdf, 2.5 MB (file_hm44kdlaf5bmtpck52uos5ux2q)
  jair.org (publisher) | web.archive.org (webarchive)
application/pdf, 2.7 MB (file_6j2elkyaxvbllkeq3sjf3obpdm)
  dspace.mit.edu (web) | web.archive.org (webarchive)
Type: article-journal
Stage: published
Date: 2018-12-19
Open Access Publication
In DOAJ
In ISSN ROAD
Not in Keepers Registry
ISSN-L: 1076-9757
Access all versions, variants, and formats of this work (e.g., pre-prints):
Crossref Metadata (via API)
Worldcat
SHERPA/RoMEO (journal policies)
wikidata.org
CORE.ac.uk
Semantic Scholar
Google Scholar