ArticulatedFusion: Real-time Reconstruction of Motion, Geometry and
Segmentation Using a Single Depth Camera
by
Chao Li and Zheheng Zhao and Xiaohu Guo
2018
Abstract
This paper proposes a real-time dynamic scene reconstruction method capable
of reproducing motion, geometry, and segmentation simultaneously from the live
depth stream of a single RGB-D camera. Our approach fuses geometry frame by
frame and uses a segmentation-enhanced node graph structure to drive the
deformation of geometry in the registration step. A two-level node motion
optimization is proposed. The optimization space of node motions and the range
of physically plausible deformations are greatly reduced by exploiting an
articulated motion prior, which is obtained through an efficient node graph
segmentation method. Compared with previous fusion-based dynamic scene
reconstruction methods, our experiments show more robust and improved
reconstruction results for tangential and occluded motions.
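The node graph described above drives geometry deformation in the style of embedded deformation: each surface vertex is skinned to its nearest graph nodes and warped by blending the nodes' rigid motions. The sketch below is not the authors' implementation; the Gaussian weighting, neighbor count, and parameter values are illustrative assumptions.

```python
import numpy as np

def node_weights(v, nodes, sigma=0.05, k=4):
    """Skinning weights of vertex v w.r.t. its k nearest graph nodes
    (Gaussian radial weights normalized to sum to 1 -- an assumption)."""
    d = np.linalg.norm(nodes - v, axis=1)
    idx = np.argsort(d)[:k]
    w = np.exp(-d[idx] ** 2 / (2 * sigma ** 2))
    return idx, w / w.sum()

def deform_vertex(v, nodes, rotations, translations, sigma=0.05, k=4):
    """Warp vertex v by blending the rigid motions (R_j, t_j) of nearby
    nodes, as in embedded-deformation-style warp fields."""
    idx, w = node_weights(v, nodes, sigma, k)
    out = np.zeros(3)
    for j, wj in zip(idx, w):
        # Each node contributes its rigid transform of v, weighted by wj.
        out += wj * (rotations[j] @ (v - nodes[j]) + nodes[j] + translations[j])
    return out
```

With identity rotations and zero translations the blend leaves a vertex unchanged, since the normalized weights sum to one; the paper's contribution constrains these per-node motions with an articulated (segment-wise) prior rather than optimizing each node independently.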
Archived Files and Locations
application/pdf, 9.8 MB — arxiv.org (repository), web.archive.org (webarchive)
arXiv: 1807.07243v1