Urban Driving with Conditional Imitation Learning
by
Jeffrey Hawke, Richard Shen, Corina Gurau, Siddharth Sharma,
Daniele Reda, Nikolay Nikolov, Przemyslaw Mazur, Sean Micklethwaite,
Nicolas Griffiths, Amar Shah, Alex Kendall
2019
Abstract
Hand-crafting generalised decision-making rules for real-world urban
autonomous driving is hard. Alternatively, learning behaviour from
easy-to-collect human driving demonstrations is appealing. Prior work has
studied imitation learning (IL) for autonomous driving with a number of
limitations. Examples include performing only lane-following rather than
following a user-defined route; using only a single camera view, or heavily
cropped frames that lack full state observability; providing only lateral
(steering) control without longitudinal (speed) control; and a lack of
interaction with traffic.
Importantly, the majority of such systems have been evaluated primarily in
simulation, a simplified domain that lacks real-world complexity. Motivated by
these challenges, we focus on learning representations of semantics, geometry
and motion with computer vision for IL from human driving demonstrations. As
our main contribution, we present an end-to-end conditional imitation learning
approach, combining both lateral and longitudinal control on a real vehicle for
following urban routes with simple traffic. We address inherent dataset bias by
data balancing, training our final policy on approximately 30 hours of
demonstrations gathered over six months. We evaluate our method on an
autonomous vehicle by driving 35 km of novel routes in European urban streets.
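
As context for the approach summarised above, the following is a minimal
sketch of what a command-conditioned ("branched") imitation policy can look
like, with one output branch per route command and both steering and speed
outputs. The architecture, module sizes, and names are illustrative
assumptions, not the paper's actual network.

    # Hypothetical sketch of a branched conditional imitation policy.
    # Sizes and names are assumptions for illustration only.
    import torch
    import torch.nn as nn

    class ConditionalPolicy(nn.Module):
        def __init__(self, feat_dim=512, n_commands=4):
            super().__init__()
            # Perception backbone: camera image -> feature vector.
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, feat_dim), nn.ReLU(),
            )
            # One branch per navigation command (e.g. follow lane, turn
            # left, turn right, go straight). Each branch predicts both
            # lateral (steering) and longitudinal (speed) control.
            self.branches = nn.ModuleList([
                nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                              nn.Linear(128, 2))  # [steering, speed]
                for _ in range(n_commands)
            ])

        def forward(self, image, command):
            feats = self.encoder(image)
            # Evaluate all branches, then select the one matching each
            # sample's route command.
            outs = torch.stack([b(feats) for b in self.branches], dim=1)
            idx = command.view(-1, 1, 1).expand(-1, 1, outs.size(-1))
            return outs.gather(1, idx).squeeze(1)

    policy = ConditionalPolicy()
    img = torch.randn(2, 3, 96, 96)   # batch of camera frames
    cmd = torch.tensor([0, 2])        # per-sample route commands
    steer_speed = policy(img, cmd)    # shape: (2, 2)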
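Similarly, the data balancing mentioned in the abstract is commonly
implemented as inverse-frequency resampling: real driving logs are dominated
by "drive straight at constant speed", so rarer manoeuvres are up-weighted.
The binning scheme and sampler below are an assumed illustration, not the
authors' pipeline.

    # Hypothetical sketch of data balancing via inverse-frequency
    # resampling over steering-angle bins (an assumption, for illustration).
    import numpy as np
    import torch
    from torch.utils.data import WeightedRandomSampler

    steering = np.random.uniform(-1, 1, size=10_000)  # stand-in labels
    bins = np.digitize(steering, np.linspace(-1, 1, 21))
    counts = np.bincount(bins, minlength=22).astype(float)
    # Samples in rare bins get proportionally larger sampling weight.
    weights = 1.0 / np.maximum(counts[bins], 1.0)
    sampler = WeightedRandomSampler(torch.as_tensor(weights),
                                    num_samples=len(weights),
                                    replacement=True)
    # Pass `sampler=sampler` to a DataLoader over the demonstration set.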
Archived files: application/pdf, 7.7 MB, at arxiv.org (repository) and
web.archive.org (webarchive); arXiv:1912.00177v1.