Super Odometry: IMU-centric LiDAR-Visual-Inertial Estimator for Challenging Environments
by
Shibo Zhao, Hengrui Zhang, Peng Wang, Lucas Nogueira, Sebastian Scherer
2021
Abstract
We propose Super Odometry, a high-precision multi-modal sensor fusion
framework that provides a simple but effective way to fuse multiple sensors,
such as LiDAR, cameras, and IMUs, and achieves robust state estimation in
perceptually-degraded environments. Unlike traditional sensor-fusion methods,
Super Odometry employs an IMU-centric data processing pipeline, which combines
the advantages of loosely coupled and tightly coupled methods and recovers
motion in a coarse-to-fine manner. The proposed framework is composed of three
parts: IMU odometry, visual-inertial odometry, and laser-inertial odometry.
The visual-inertial and laser-inertial odometry provide pose priors that
constrain the IMU bias and, in return, receive motion predictions from the
IMU odometry. To ensure high performance in real time, we apply a dynamic
octree that consumes only 10% of the running time of a static KD-tree. The
proposed system was deployed on drones and ground robots as part of Team
Explorer's entry in the DARPA Subterranean Challenge, where the team won
1st and 2nd place in the Tunnel and Urban Circuits, respectively.
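
To make the coarse-to-fine flow described above concrete, here is a minimal
C++ sketch assuming a standard strapdown IMU model (Eigen for linear algebra).
The state layout, the propagate/applyPosePrior names, and the feedback gains
are illustrative assumptions, not the paper's implementation; the actual
system constrains the biases jointly in an optimization rather than with
fixed gains.

    // Minimal sketch of the coarse-to-fine idea; names and gains are
    // illustrative, not taken from the Super Odometry codebase.
    #include <Eigen/Dense>
    #include <cstdio>

    struct ImuState {
      Eigen::Quaterniond q = Eigen::Quaterniond::Identity();  // orientation
      Eigen::Vector3d    p = Eigen::Vector3d::Zero();         // position
      Eigen::Vector3d    v = Eigen::Vector3d::Zero();         // velocity
      Eigen::Vector3d    bg = Eigen::Vector3d::Zero();        // gyro bias
      Eigen::Vector3d    ba = Eigen::Vector3d::Zero();        // accel bias
    };

    // Coarse step: integrate bias-corrected IMU samples to predict motion.
    // This prediction is what seeds the visual/laser-inertial modules.
    void propagate(ImuState& s, const Eigen::Vector3d& gyro,
                   const Eigen::Vector3d& accel, double dt) {
      const Eigen::Vector3d g(0.0, 0.0, -9.81);
      const Eigen::Vector3d w = gyro - s.bg;       // bias-corrected rate
      if (w.norm() > 1e-12)
        s.q = (s.q * Eigen::Quaterniond(
                   Eigen::AngleAxisd(w.norm() * dt, w.normalized())))
                  .normalized();
      const Eigen::Vector3d a = s.q * (accel - s.ba) + g;  // world accel
      s.p += s.v * dt + 0.5 * a * dt * dt;
      s.v += a * dt;
    }

    // Fine step, stood in for by a first-order update: a pose prior from
    // the visual- or laser-inertial odometry pulls the state back and
    // feeds a small correction into the accelerometer bias (hypothetical
    // gains; the real system solves this as a factor-graph problem).
    void applyPosePrior(ImuState& s, const Eigen::Vector3d& p_prior,
                        double gain = 0.5) {
      const Eigen::Vector3d r = p_prior - s.p;     // position residual
      s.p += gain * r;                             // correct the pose
      s.ba -= 0.05 * gain * (s.q.inverse() * r);   // crude bias feedback
    }

    int main() {
      ImuState s;
      for (int i = 0; i < 100; ++i)  // 1 s of 100 Hz IMU data, hovering
        propagate(s, Eigen::Vector3d::Zero(),
                  Eigen::Vector3d(0, 0, 9.81), 0.01);
      applyPosePrior(s, Eigen::Vector3d::Zero());  // prior: we stayed put
      std::printf("drift after correction: %.3f m\n", s.p.norm());
    }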
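
The reported speedup over a static KD-tree comes from incremental updates: a
static KD-tree must be rebuilt from all accumulated map points after each
scan to stay balanced, whereas a dynamic octree inserts new points along a
single root-to-leaf path. Below is a minimal sketch of such an incrementally
grown octree; the class name, leaf size, and map bounds are hypothetical and
not taken from the paper.

    // Hypothetical dynamic octree with lazy child allocation. Each insert
    // touches one root-to-leaf path, so adding a scan costs roughly
    // O(points * depth) with no global rebuild.
    #include <array>
    #include <memory>
    #include <vector>

    struct Point { double x, y, z; };

    class DynamicOctree {
     public:
      DynamicOctree(Point center, double half)
          : center_(center), half_(half) {}

      void insert(const Point& p) {
        if (half_ <= kLeafHalf) {     // leaf voxel: just store the point
          points_.push_back(p);
          return;
        }
        const int idx = (p.x > center_.x ? 1 : 0) |
                        (p.y > center_.y ? 2 : 0) |
                        (p.z > center_.z ? 4 : 0);
        if (!children_[idx]) {        // lazily create the child octant
          const double h = half_ / 2.0;
          const Point c{center_.x + (idx & 1 ? h : -h),
                        center_.y + (idx & 2 ? h : -h),
                        center_.z + (idx & 4 ? h : -h)};
          children_[idx] = std::make_unique<DynamicOctree>(c, h);
        }
        children_[idx]->insert(p);
      }

     private:
      static constexpr double kLeafHalf = 0.25;  // 0.5 m leaves (assumed)
      Point center_;
      double half_;
      std::vector<Point> points_;
      std::array<std::unique_ptr<DynamicOctree>, 8> children_;
    };

    int main() {
      DynamicOctree map(Point{0, 0, 0}, 100.0);  // 200 m cube (assumed)
      map.insert(Point{1.0, 2.0, 3.0});  // new scan points go straight in
    }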
arXiv:2104.14938v1