DeformNet: Free-Form Deformation Network for 3D Shape Reconstruction from a Single Image
by
Andrey Kurenkov, Jingwei Ji, Animesh Garg, Viraj Mehta, JunYoung Gwak,
Christopher Choy, Silvio Savarese
2017
Abstract
3D reconstruction from a single image is a key problem in multiple
applications ranging from robotic manipulation to augmented reality. Prior
methods have tackled this problem through generative models which predict 3D
reconstructions as voxels or point clouds. However, these methods can be
computationally expensive and miss fine details. We introduce a new
differentiable layer for 3D data deformation and use it in DeformNet to learn a
model for 3D reconstruction-through-deformation. DeformNet takes an image
input, retrieves the nearest shape template from a database, and deforms the
template to match the query image. We evaluate our approach on the ShapeNet
dataset and show that (a) the Free-Form Deformation layer is a powerful new
building block for deep learning models that manipulate 3D data; (b) DeformNet
combines this FFD layer with shape retrieval to produce smooth,
detail-preserving point-cloud reconstructions that are qualitatively plausible
with respect to a single query image; and (c) DeformNet matches or outperforms
other state-of-the-art 3D reconstruction methods by significant margins. For
more information, visit:
https://deformnet-site.github.io/DeformNet-website/ .
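The abstract's Free-Form Deformation (FFD) layer builds on the classical FFD
formulation: each point of a shape, expressed in lattice coordinates in
[0,1]^3, is re-positioned as a Bernstein-polynomial blend of a grid of control
points, so moving control points smoothly deforms the embedded shape. The
sketch below is not the paper's implementation; it is a minimal NumPy
illustration of that classical formulation, with function names
(`bernstein`, `ffd`) chosen here for clarity.

```python
import numpy as np
from math import comb

def bernstein(n, i, t):
    # Bernstein basis polynomial B_{i,n}(t) = C(n,i) * t^i * (1-t)^(n-i)
    return comb(n, i) * (t ** i) * ((1 - t) ** (n - i))

def ffd(points, control_points):
    """Classical free-form deformation of points in the unit cube.

    points: (N, 3) array of (s, t, u) lattice coordinates in [0, 1]^3.
    control_points: (l+1, m+1, n+1, 3) lattice of control-point positions.
    Returns the (N, 3) array of deformed positions.
    """
    l, m, n = (d - 1 for d in control_points.shape[:3])
    out = np.zeros_like(points, dtype=float)
    for p_idx, (s, t, u) in enumerate(points):
        acc = np.zeros(3)
        # Trivariate Bernstein blend over the control lattice.
        for i in range(l + 1):
            bi = bernstein(l, i, s)
            for j in range(m + 1):
                bj = bernstein(m, j, t)
                for k in range(n + 1):
                    bk = bernstein(n, k, u)
                    acc += bi * bj * bk * control_points[i, j, k]
        out[p_idx] = acc
    return out
```

Because Bernstein bases have linear precision, an undisplaced lattice
(control point (i,j,k) placed at (i/l, j/m, k/n)) is the identity map; the
learned network in DeformNet predicts control-point displacements away from
this identity, and every step of the blend is differentiable in the control
points, which is what makes FFD usable as a network layer.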
Archived Files and Locations
application/pdf, 1.3 MB
arxiv.org (repository); web.archive.org (webarchive)
arXiv: 1708.04672v1