Learning Abstract and Transferable Representations for Planning
by
Steven James, Benjamin Rosman, George Konidaris
2022
Abstract
We are concerned with the question of how an agent can acquire its own
representations from sensory data. We restrict our focus to learning
representations for long-term planning, a class of problems that
state-of-the-art learning methods are unable to solve. We propose a framework
for autonomously learning state abstractions of an agent's environment, given a
set of skills. Importantly, these abstractions are task-independent, and so can
be reused to solve new tasks. We demonstrate how an agent can use an existing
set of options to acquire representations from ego- and object-centric
observations. These abstractions can immediately be reused by the same agent in
new environments. We show how to combine these portable representations with
problem-specific ones to generate a sound description of a specific task that
can be used for abstract planning. Finally, we show how to autonomously
construct a multi-level hierarchy consisting of increasingly abstract
representations. Since these hierarchies are transferable, higher-order
concepts can be reused in new tasks, relieving the agent of the need to relearn
them. Our results demonstrate that our approach allows an agent to transfer
prior knowledge to new tasks, with sample efficiency improving as the number of
tasks increases.
arXiv:2205.02092v1