ArticulatedFusion: Real-time Reconstruction of Motion, Geometry and Segmentation Using a Single Depth Camera

by Chao Li and Zheheng Zhao and Xiaohu Guo

Released as an article.

2018  

Abstract

This paper proposes a real-time dynamic scene reconstruction method capable of reproducing motion, geometry, and segmentation simultaneously, given a live depth stream from a single RGB-D camera. Our approach fuses geometry frame by frame and uses a segmentation-enhanced node graph structure to drive the deformation of geometry in the registration step. A two-level node motion optimization is proposed. The optimization space of node motions and the range of physically plausible deformations are greatly reduced by exploiting the articulated motion prior, which is solved with an efficient node graph segmentation method. Compared to previous fusion-based dynamic scene reconstruction methods, our experiments show robust and improved reconstruction results for tangential and occluded motions.
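The abstract describes a node graph that drives the deformation of fused geometry. The paper's segmentation-enhanced graph is not reproduced here, but the general idea of node-graph-driven warping (embedded deformation, as used in fusion-based pipelines) can be sketched: each surface vertex is moved by blending the rigid motions of its nearest graph nodes. The function name, the choice of Gaussian weights, and the `sigma` value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def warp_vertex(v, nodes, rotations, translations, k=4, sigma=0.1):
    """Warp one vertex by blending the rigid motions of its k nearest
    graph nodes (embedded-deformation style; a hypothetical sketch of
    the node-graph warping the abstract refers to, not the paper's
    segmentation-enhanced formulation).

    v            -- (3,) vertex position
    nodes        -- (N, 3) node positions
    rotations    -- (N, 3, 3) per-node rotation matrices
    translations -- (N, 3) per-node translations
    """
    # Distances from the vertex to every node; pick the k nearest.
    d = np.linalg.norm(nodes - v, axis=1)
    idx = np.argsort(d)[:k]

    # Gaussian blending weights (sigma is an assumed support radius),
    # normalized to sum to one over the selected nodes.
    w = np.exp(-d[idx] ** 2 / (2.0 * sigma ** 2))
    w /= w.sum()

    # Each node transforms the vertex rigidly about its own position;
    # the warped vertex is the weighted average of these candidates.
    out = np.zeros(3)
    for wi, i in zip(w, idx):
        out += wi * (rotations[i] @ (v - nodes[i]) + nodes[i] + translations[i])
    return out
```

With identity rotations and zero translations the warp is the identity, and a common translation applied to all nodes shifts the vertex by that translation, which is a quick sanity check on the blending.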

Archived Files and Locations

application/pdf  9.8 MB
arxiv.org (repository)
web.archive.org (webarchive)
Type: article
Stage: submitted
Date: 2018-07-19
Version: v1
Language: en
arXiv: 1807.07243v1