Safe Model-based Reinforcement Learning with Stability Guarantees
by Felix Berkenkamp, Matteo Turchetta, Angela P. Schoellig, Andreas Krause
2017
Abstract
Reinforcement learning is a powerful paradigm for learning optimal policies
from experimental data. However, to find optimal policies, most reinforcement
learning algorithms explore all possible actions, which may be harmful to
real-world systems. As a consequence, learning algorithms are rarely applied to
safety-critical systems in the real world. In this paper, we present a learning
algorithm that explicitly considers safety, defined in terms of stability
guarantees. Specifically, we extend control-theoretic results on Lyapunov
stability verification and show how to use statistical models of the dynamics
to obtain high-performance control policies with provable stability
certificates. Moreover, under additional regularity assumptions in terms of a
Gaussian process prior, we prove that one can effectively and safely collect
data in order to learn about the dynamics and thus both improve control
performance and expand the safe region of the state space. In our experiments,
we show how the resulting algorithm can safely optimize a neural network policy
on a simulated inverted pendulum, without the pendulum ever falling down.
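To make the certification idea concrete, below is a minimal sketch (not the authors' code) of the kind of safety check the abstract describes: a Gaussian process models the closed-loop dynamics, and a Lyapunov decrease condition is verified on a discretized state space using the GP's high-probability confidence bounds. Everything specific here is an illustrative assumption, not taken from the paper: the toy dynamics true_dynamics, the candidate V(x) = x^2, the constants beta, L_dv, and tau, and the use of scikit-learn's GP as a stand-in for the paper's model.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def true_dynamics(x):
    # Illustrative stable 1-D closed-loop system x_{t+1} = f(x_t) (an assumption).
    return 0.8 * x + 0.1 * np.sin(x)

# Fit a GP to a few observed transitions: the "statistical model of the dynamics".
X_train = np.linspace(-0.5, 0.5, 15).reshape(-1, 1)
y_train = true_dynamics(X_train).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-4)
gp.fit(X_train, y_train)

# Lyapunov candidate V(x) = x^2 and a discretization of the state space.
V = lambda x: np.ravel(x) ** 2
tau = 0.01                      # discretization step (assumed)
grid = np.arange(-1.0, 1.0 + tau, tau).reshape(-1, 1)

# High-probability upper bound on V at the next state: V is monotone in |x|,
# so bound |f(x)| by |mean| + beta * std (beta sets the confidence level).
beta = 2.0                      # confidence-interval scaling (assumed)
L_dv = 0.2                      # Lipschitz-type margin constant (assumed)
mean, std = gp.predict(grid, return_std=True)
v_next_ub = (np.abs(mean) + beta * std) ** 2

# Decrease condition: V must shrink by a margin L_dv * tau that absorbs the
# discretization error; where it holds, the dynamics provably move "inward".
decreases = v_next_ub - V(grid) < -L_dv * tau
safe = np.abs(grid[decreases])

if safe.size:
    print(f"Decrease certified for {safe.min():.2f} <= |x| <= {safe.max():.2f}")

Note that very close to the equilibrium the strict margin cannot hold (the true decrease itself goes to zero there), so this toy check certifies an annulus; the paper treats the neighborhood of the equilibrium more carefully via the level sets of the Lyapunov function when constructing the region of attraction.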
Archived Files and Locations
application/pdf, 1.1 MB
arxiv.org (repository); web.archive.org (webarchive)
arXiv: 1705.08551v2