Safe Reinforcement Learning with Chance-constrained Model Predictive Control
by
Samuel Pfrommer, Tanmay Gautam, Alec Zhou, Somayeh Sojoudi
2021
Abstract
Real-world reinforcement learning (RL) problems often demand that agents
behave safely by obeying a set of designed constraints. We address the
challenge of safe RL by coupling a safety guide based on model predictive
control (MPC) with a modified policy gradient framework in a linear setting
with continuous actions. The guide enforces safe operation of the system by
embedding safety requirements as chance constraints in the MPC formulation. The
policy gradient training step then includes a safety penalty which trains the
base policy to behave safely. We show theoretically that this penalty allows
for the safety guide to be removed after training and illustrate our method
using experiments with a simulated quadrotor.
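As a rough illustration of the kind of safety guide the abstract describes, the sketch below shows a minimal chance-constrained MPC filter for a linear system x_{t+1} = A x_t + B u_t + w_t with Gaussian noise. The chance constraints P(H x_t <= h) >= 1 - delta are tightened row-by-row into deterministic constraints on the mean state via the Gaussian quantile, and the guide projects the policy's proposed action onto the set of actions from which the MPC problem stays feasible. The matrices, horizon, and tightening scheme here are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
import cvxpy as cp
from scipy.stats import norm


def safety_guide(x0, u_proposed, A, B, W, H, h, horizon=10, delta=0.05):
    """Minimal-intervention chance-constrained MPC guide (illustrative sketch).

    Finds the action closest to the policy's proposed action such that the
    mean trajectory satisfies the tightened constraints H x <= h - margin
    over the planning horizon.
    """
    n, m = B.shape
    x = cp.Variable((horizon + 1, n))
    u = cp.Variable((horizon, m))

    # Propagate the open-loop state covariance and precompute per-step
    # constraint-tightening margins from the Gaussian quantile.
    Sigma = np.zeros((n, n))
    margins = []
    for _ in range(horizon + 1):
        margins.append(norm.ppf(1 - delta) * np.sqrt(np.diag(H @ Sigma @ H.T)))
        Sigma = A @ Sigma @ A.T + W

    constraints = [x[0] == x0]
    for t in range(horizon):
        constraints += [x[t + 1] == A @ x[t] + B @ u[t]]        # mean dynamics
        constraints += [H @ x[t + 1] <= h - margins[t + 1]]     # tightened chance constraint

    # Minimal intervention: deviate from the proposed action as little as possible.
    problem = cp.Problem(cp.Minimize(cp.sum_squares(u[0] - u_proposed)), constraints)
    problem.solve()
    return u.value[0]
```

In a training loop of the kind the abstract outlines, the deviation between the guide's output and the policy's proposed action could serve as the safety penalty added to the policy gradient objective, so that after training the base policy rarely triggers the guide and it can be removed.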
arXiv:2112.13941v1