Hierarchical Reinforcement Learning for Sensor-Based Navigation
by Christopher Gebauer, Maren Bennewitz (2021)
Abstract
Robotic systems are nowadays capable of solving complex navigation tasks
under real-world conditions. However, their capabilities are intrinsically
limited by the imagination of the designer and consequently lack
generalizability to initially unconsidered situations. This makes deep
reinforcement learning especially interesting, as these algorithms promise a
self-learning system relying only on feedback from the environment. Having the
system itself search for an optimal solution brings the benefit of great
generalization or even constant improvement when life-long learning is
addressed. In this paper, we address robot navigation in continuous action
space using deep hierarchical reinforcement learning without including the
target location in the state representation. Our agent self-assigns internal
goals and learns to extract reasonable waypoints to reach the desired target
position based only on local sensor data. In our experiments, we demonstrate
that our hierarchical structure improves the performance of the navigation
agent in terms of collected reward and success rate in comparison to a flat
structure, while not requiring any global or target information.
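The two-level decomposition described above can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the class names, dimensions, and the random-weight linear maps standing in for trained networks are all assumptions; only the structure (a high-level policy that self-assigns a waypoint from local sensor data, and a low-level policy that tracks it with continuous actions) follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

class HighLevelPolicy:
    """Maps a local lidar scan to a relative waypoint (dx, dy) --
    the self-assigned internal goal; no target location appears
    anywhere in the state."""
    def __init__(self, scan_dim: int):
        self.w = rng.normal(scale=0.1, size=(2, scan_dim))

    def waypoint(self, scan: np.ndarray) -> np.ndarray:
        return np.tanh(self.w @ scan)  # bounded internal goal

class LowLevelPolicy:
    """Maps (scan, waypoint) to a continuous command (v, omega)."""
    def __init__(self, scan_dim: int):
        self.w = rng.normal(scale=0.1, size=(2, scan_dim + 2))

    def act(self, scan: np.ndarray, goal: np.ndarray) -> np.ndarray:
        return np.tanh(self.w @ np.concatenate([scan, goal]))

scan_dim = 16
high, low = HighLevelPolicy(scan_dim), LowLevelPolicy(scan_dim)
scan = rng.uniform(0.2, 3.0, size=scan_dim)  # fake range readings

goal = high.waypoint(scan)      # internal goal from local sensing only
action = low.act(scan, goal)    # continuous (v, omega) in [-1, 1]
print(goal.shape, action.shape)
```

In a trained system both maps would be neural networks optimized with a deep RL algorithm, with the high-level policy rewarded for proposing reachable, progress-making waypoints and the low-level policy for reaching them.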
Archived Files and Locations
application/pdf, 3.7 MB
arxiv.org (repository), web.archive.org (webarchive)
arXiv: 2108.13268v1