On the Practical Consistency of Meta-Reinforcement Learning Algorithms
by
Zheng Xiong, Luisa Zintgraf, Jacob Beck, Risto Vuorio, Shimon Whiteson
2021
Abstract
Consistency is the theoretical property of a meta-learning algorithm that
ensures it can, under certain assumptions, adapt to any task at test time.
An open question is whether and how theoretical consistency translates into
practice, compared to inconsistent algorithms. In this paper, we
empirically investigate this question on a set of representative meta-RL
algorithms. We find that theoretically consistent algorithms can indeed
usually adapt to out-of-distribution (OOD) tasks, while inconsistent ones
cannot, although even consistent algorithms can still fail in practice for
reasons such as poor exploration. We further find that theoretically
inconsistent algorithms can be made consistent by continuing to update all
agent components on the OOD tasks, and that they then adapt as well as or
better than originally consistent ones. We conclude that theoretical
consistency is indeed a desirable property, and that inconsistent meta-RL
algorithms can easily be made consistent to enjoy the same benefits.
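
The adaptation recipe named in the abstract ("continue to update all agent
components on the OOD tasks") can be illustrated with a minimal sketch. The
Agent architecture, the collect_rollout helper, and the REINFORCE-style loss
below are hypothetical stand-ins, not the paper's actual implementation; the
only point being demonstrated is that the test-time optimizer covers every
component (task encoder and policy head), not just a designated adaptation
module.

# Minimal sketch: test-time adaptation that makes an otherwise inconsistent
# meta-RL agent "practically consistent" by continuing gradient updates on
# ALL agent components using data from the OOD task itself.
# Module shapes, the loss, and collect_rollout are illustrative assumptions.

import torch
import torch.nn as nn

class Agent(nn.Module):
    def __init__(self, obs_dim=4, act_dim=2, latent_dim=8):
        super().__init__()
        self.encoder = nn.GRU(obs_dim, latent_dim, batch_first=True)  # task inference
        self.policy = nn.Linear(latent_dim, act_dim)                  # action logits

    def forward(self, obs_seq):
        latents, _ = self.encoder(obs_seq)
        return self.policy(latents[:, -1])  # act from the last latent state

def collect_rollout(agent, T=16, obs_dim=4):
    """Placeholder for environment interaction on the OOD task."""
    obs = torch.randn(1, T, obs_dim)   # fake observation sequence
    ret = torch.randn(1)               # fake empirical return
    return obs, ret

agent = Agent()
# Key point: the optimizer spans all parameters (encoder AND policy),
# rather than freezing the meta-trained components at test time.
opt = torch.optim.Adam(agent.parameters(), lr=1e-3)

for step in range(100):  # keep updating on the OOD task at test time
    obs, ret = collect_rollout(agent)
    log_probs = torch.log_softmax(agent(obs), dim=-1)
    # Toy REINFORCE-style objective standing in for the true RL loss.
    loss = -(log_probs.max(dim=-1).values * ret).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

Under these assumptions, the loop is an ordinary RL fine-tuning procedure;
the paper's observation is that applying it to all components restores the
adaptation guarantee that inconsistent algorithms otherwise lack.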
Archived Files and Locations
application/pdf, 1.2 MB — arXiv:2112.00478v1 (arxiv.org; archived at web.archive.org)