Meshing Point Clouds with Predicted Intrinsic-Extrinsic Ratio Guidance

by Minghua Liu, Xiaoshuai Zhang, Hao Su

Released as an article.

2020  

Abstract

We are interested in reconstructing the mesh representation of object surfaces from point clouds. Surface reconstruction is a prerequisite for downstream applications such as rendering, collision avoidance for planning, and animation. However, the task is challenging if the input point cloud has a low resolution, which is common in real-world scenarios (e.g., from LiDAR or Kinect sensors). Existing learning-based mesh generative methods mostly predict the surface by first building an object-level shape embedding, a design that causes issues in generating fine-grained details and generalizing to unseen categories. Instead, we propose to leverage the input point cloud as much as possible, by only adding connectivity information to existing points. In particular, we predict which triplets of points should form faces. Our key innovation is a surrogate of local connectivity, calculated by comparing the intrinsic and extrinsic metrics. We learn to predict this surrogate using a deep point cloud network and then feed it to an efficient post-processing module for high-quality mesh generation. Experiments on synthetic and real data demonstrate that our method not only preserves details and handles ambiguous structures, but also generalizes well to unseen categories.
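To make the abstract's key signal concrete, here is a minimal sketch of one plausible reading of the intrinsic-extrinsic comparison: score a candidate triangle by the ratio of geodesic (intrinsic) distance to Euclidean (extrinsic) distance, averaged over its three edges, where a ratio close to 1 suggests the triangle lies on the underlying surface. This is not the authors' implementation; the `geodesic` matrix (assumed to come from a ground-truth mesh at training time), the edge averaging, and the `threshold` in `is_plausible_face` are hypothetical choices for illustration, and in the described pipeline the surrogate is predicted by a deep point cloud network at inference time.

```python
import numpy as np
from itertools import combinations

def intrinsic_extrinsic_ratio(geodesic, points, triplet):
    """Average geodesic/Euclidean distance ratio over the edges of a
    candidate triangle.

    geodesic : (N, N) array of intrinsic (surface) distances, assumed
               available from a ground-truth mesh at training time
    points   : (N, 3) array of point coordinates
    triplet  : indices (i, j, k) of the candidate face
    """
    ratios = []
    for a, b in combinations(triplet, 2):
        extrinsic = np.linalg.norm(points[a] - points[b])
        if extrinsic < 1e-8:  # degenerate edge; skip it
            continue
        ratios.append(geodesic[a, b] / extrinsic)
    return float(np.mean(ratios)) if ratios else float("inf")

def is_plausible_face(geodesic, points, triplet, threshold=1.1):
    # A ratio near 1 means the three points are mutually close on the
    # surface as well as in space, so the triangle plausibly belongs to
    # the mesh. The threshold value is a hypothetical choice, not taken
    # from the paper.
    return intrinsic_extrinsic_ratio(geodesic, points, triplet) < threshold
```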

Archived Files and Locations

application/pdf  28.3 MB
file_2aiu4m3h45bzxmurenox3fe2pm
arxiv.org (repository)
web.archive.org (webarchive)
Type: article
Stage: submitted
Date: 2020-07-17
Version: v1
Language: en
arXiv: 2007.09267v1
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints).
Catalog Record
Revision: b3fa1116-8290-47f0-9a2f-eb066f6116bd