DH3D: Deep Hierarchical 3D Descriptors for Robust Large-Scale 6DoF Relocalization
by
Juan Du, Rui Wang, Daniel Cremers
2020
Abstract
For relocalization in large-scale point clouds, we propose the first approach
that unifies global place recognition and local 6DoF pose refinement. To this
end, we design a Siamese network that jointly learns 3D local feature detection
and description directly from raw 3D points. It integrates FlexConv and
Squeeze-and-Excitation (SE) to ensure that the learned local descriptor
captures multi-level geometric information and channel-wise relations. For
detecting 3D keypoints we predict the discriminativeness of the local
descriptors in an unsupervised manner. We generate the global descriptor by
directly aggregating the learned local descriptors with an effective attention
mechanism. In this way, local and global 3D descriptors are inferred in a
single forward pass. Experiments on various benchmarks demonstrate that our
method achieves competitive results for both global point cloud retrieval and
local point cloud registration in comparison to state-of-the-art approaches. To
validate the generalizability and robustness of our 3D keypoints, we
demonstrate that our method also performs favorably without fine-tuning on the
registration of point clouds that were generated by a visual SLAM system. Code
and related materials are available at
https://vision.in.tum.de/research/vslam/dh3d.
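The abstract describes generating the global descriptor by aggregating learned local descriptors with an attention mechanism. A minimal sketch of that idea, using softmax-weighted pooling over per-descriptor attention scores, might look as follows; the function name, the use of NumPy, and the plain softmax weighting are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def attention_aggregate(local_desc, scores):
    """Aggregate N local descriptors (N x D) into one global D-dim
    descriptor via softmax attention weights.

    Hypothetical simplification: the paper's attention mechanism is
    learned end-to-end; here the scores are simply given as input.
    """
    # Numerically stable softmax over the N attention scores.
    w = np.exp(scores - scores.max())
    w /= w.sum()
    # Weighted sum over the descriptor axis: (N, 1) * (N, D) -> (D,)
    return (w[:, None] * local_desc).sum(axis=0)

# Example: 100 local descriptors of dimension 128 with random scores.
local_desc = np.random.rand(100, 128)
scores = np.random.rand(100)
global_desc = attention_aggregate(local_desc, scores)
assert global_desc.shape == (128,)
```

With uniform scores this reduces to mean pooling, which makes the role of the attention weights explicit: they let discriminative local descriptors dominate the global representation used for place recognition.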
Archived Files and Locations
application/pdf, 7.9 MB
arxiv.org (repository) | web.archive.org (webarchive)
arXiv: 2007.09217v1