Adaptive Behavior Generation for Autonomous Driving using Deep Reinforcement Learning with Compact Semantic States

by Peter Wolf, Karl Kurzer, Tobias Wingert, Florian Kuhnt, J. Marius Zöllner

Released as an article.

2018  

Abstract

Making the right decision in traffic is a challenging task that is highly dependent on individual preferences as well as the surrounding environment. It is therefore hard to model based solely on expert knowledge. In this work we use Deep Reinforcement Learning to learn maneuver decisions based on a compact semantic state representation. This ensures a consistent model of the environment across scenarios, as well as a behavior adaptation function enabling on-line changes of desired behaviors without re-training. The input for the neural network is a simulated object list similar to that of Radar or Lidar sensors, superimposed by a relational semantic scene description. The state and the reward are extended by a behavior adaptation function and a parameterization, respectively. With little expert knowledge and a set of mid-level actions, the agent is capable of adhering to traffic rules and learns to drive safely in a variety of situations.
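As a rough illustration of the mechanism the abstract describes, the sketch below shows how a behavior adaptation function can extend the state and how a parameterization can extend the reward, so that the desired driving style is changed at run time without re-training. All names, state fields, and weights here are hypothetical; the paper's actual state representation and reward terms are not reproduced in this record.

```python
from dataclasses import dataclass

@dataclass
class SemanticState:
    """Hypothetical compact semantic state of the ego vehicle."""
    ego_speed: float    # current speed in m/s
    gap_ahead: float    # distance to the leading vehicle in m
    lane_offset: float  # lateral offset from the lane center in m

def adapted_state(state: SemanticState, theta: tuple) -> list:
    """Behavior adaptation: append the adaptation parameters theta to the
    state vector, so the policy network conditions on the desired behavior."""
    return [state.ego_speed, state.gap_ahead, state.lane_offset, *theta]

def parameterized_reward(state: SemanticState, theta: tuple) -> float:
    """Parameterized reward: theta = (w_speed, w_comfort) weights the reward
    terms, so changing theta changes the preferred driving style on-line."""
    w_speed, w_comfort = theta
    desired_speed = 15.0  # m/s, assumed target speed for this sketch
    r_speed = -abs(state.ego_speed - desired_speed)   # penalize speed deviation
    r_comfort = -abs(state.lane_offset)               # penalize lane deviation
    return w_speed * r_speed + w_comfort * r_comfort

s = SemanticState(ego_speed=10.0, gap_ahead=30.0, lane_offset=0.5)
print(adapted_state(s, (1.0, 0.2)))        # state plus adaptation parameters
print(parameterized_reward(s, (1.0, 0.2)))  # reward under a speed-focused theta
```

Because theta is part of the network input, a single trained policy can be steered toward different behaviors (e.g. faster vs. more comfortable driving) simply by changing theta at inference time.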

Archived Files and Locations

application/pdf  729.1 kB
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2018-09-10
Version   v1
Language   en
arXiv  1809.03214v1
Work Entity
Access all versions, variants, and formats of this work (e.g. pre-prints)
Catalog Record
Revision: a7030c4c-d26c-40e3-a70a-7c72cb009934