An Auto-tuning Framework for Autonomous Vehicles
by
Haoyang Fan, Zhongpu Xia, Changchun Liu, Yaqin Chen, Qi Kong
2018
Abstract
Many autonomous driving motion planners generate trajectories by optimizing a
reward/cost functional. Designing and tuning a high-performance reward/cost
functional for Level-4 autonomous driving vehicles with exposure to different
driving conditions is challenging. Traditionally, reward/cost functional tuning
involves substantial human effort and time spent on both simulations and road
tests. As scenarios become more complicated, tuning to improve motion
planner performance becomes increasingly difficult. To systematically solve
this issue, we develop a data-driven auto-tuning framework based on the Apollo
autonomous driving framework. The framework includes a novel rank-based
conditional inverse reinforcement learning algorithm, an offline training
strategy, and an automatic method for collecting and labeling data. Our
auto-tuning framework has the following advantages that make it suitable for
tuning an autonomous driving motion planner. First, compared with most
inverse reinforcement learning algorithms, our training is efficient
and can be applied to different scenarios. Second, the offline
training strategy offers a safe way to adjust the parameters before public road
testing. Third, the expert driving data and information about the surrounding
environment are collected and automatically labeled, which considerably reduces
the manual effort. Finally, the motion planner tuned by the framework is
examined via both simulation and public road testing and is shown to achieve
good performance.
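The rank-based idea in the abstract can be illustrated with a minimal sketch: a linear reward over trajectory features is tuned offline so that each logged expert trajectory out-scores randomly sampled candidate trajectories from the same scene, which conditions the ranking on the scenario. The feature names, the hinge-margin loss, and the helper names (rank_loss_grad, tune_offline, sample_candidates) are illustrative assumptions for this sketch, not the paper's actual formulation.

    import numpy as np

    # Hypothetical trajectory features (illustrative, not from the paper):
    # e.g., smoothness, lateral jerk, obstacle clearance, speed error.
    NUM_FEATURES = 4

    def reward(theta, features):
        """Linear reward functional over trajectory features; higher is better."""
        return float(np.dot(theta, features))

    def rank_loss_grad(theta, expert_feat, sampled_feats, margin=1.0):
        """Hinge-style pairwise ranking loss and its subgradient: the expert
        trajectory should out-score every sampled candidate from the same
        scene by at least `margin`. Inputs are NumPy feature vectors."""
        loss, grad = 0.0, np.zeros_like(theta)
        for f in sampled_feats:
            violation = margin - (reward(theta, expert_feat) - reward(theta, f))
            if violation > 0.0:
                loss += violation
                grad += f - expert_feat  # pushes theta toward preferring the expert
        n = len(sampled_feats)
        return loss / n, grad / n

    def tune_offline(demos, sample_candidates, lr=0.01, epochs=50):
        """Offline tuning loop: `demos` is a list of (scene, expert_features)
        pairs from logged human driving; `sample_candidates(scene)` yields
        feature vectors of randomly sampled trajectories for that scene."""
        theta = np.zeros(NUM_FEATURES)
        for _ in range(epochs):
            for scene, expert_feat in demos:
                _, grad = rank_loss_grad(theta, expert_feat,
                                         sample_candidates(scene))
                theta -= lr * grad
        return theta

Because training consumes only logged demonstrations and sampled alternatives, the parameters can be adjusted entirely offline, matching the abstract's point that tuning happens safely before any public road testing.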
arXiv:1808.04913v1 (application/pdf, 580.7 kB; archived at arxiv.org and web.archive.org)