A Shading-Guided Generative Implicit Model for Shape-Accurate 3D-Aware Image Synthesis
by
Xingang Pan, Xudong Xu, Chen Change Loy, Christian Theobalt, Bo Dai
2021
Abstract
The advancement of generative radiance fields has pushed the boundary of
3D-aware image synthesis. Motivated by the observation that a 3D object should
look realistic from multiple viewpoints, these methods introduce a multi-view
constraint as regularization to learn valid 3D radiance fields from 2D images.
Despite the progress, they often fall short of capturing accurate 3D shapes due
to the shape-color ambiguity, limiting their applicability in downstream tasks.
In this work, we address this ambiguity by proposing a novel shading-guided
generative implicit model that is able to learn a starkly improved shape
representation. Our key insight is that an accurate 3D shape should also yield
a realistic rendering under different lighting conditions. This multi-lighting
constraint is realized by modeling illumination explicitly and performing
shading with various lighting conditions. Gradients are derived by feeding the
synthesized images to a discriminator. To compensate for the additional
computational burden of calculating surface normals, we further devise an
efficient volume rendering strategy via surface tracking, reducing the training
and inference time by 24% and 48%, respectively. Our experiments on multiple
datasets show that the proposed approach achieves photorealistic 3D-aware image
synthesis while capturing accurate underlying 3D shapes. We demonstrate
improved performance of our approach on 3D shape reconstruction against
existing methods, and show its applicability on image relighting. Our code will
be released at https://github.com/XingangPan/ShadeGAN.
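The multi-lighting constraint described above renders the same generated shape under different, randomly sampled lighting conditions before passing the images to the discriminator. A minimal sketch of that shading step, assuming a simple Lambertian model with ambient and diffuse coefficients (the function name and coefficient values here are illustrative, not taken from the released code):

```python
import numpy as np

def lambertian_shade(albedo, normals, light_dir, k_ambient=0.3, k_diffuse=0.7):
    """Shade per-point albedo with a Lambertian model.

    albedo:    (N, 3) base colors in [0, 1]
    normals:   (N, 3) unit surface normals
    light_dir: (3,) unit vector pointing toward the light
    """
    # Diffuse term: cosine between normal and light, clamped at zero
    # so back-facing points receive only ambient light.
    cos = np.clip(normals @ light_dir, 0.0, None)            # (N,)
    return albedo * (k_ambient + k_diffuse * cos[:, None])   # (N, 3)

# Sample a random light direction per generated image, so the
# discriminator sees the same underlying shape under many lightings;
# an inaccurate shape yields inconsistent normals and unrealistic shading.
rng = np.random.default_rng(0)
light = rng.normal(size=3)
light /= np.linalg.norm(light)

normals = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
albedo = np.full((2, 3), 0.8)
shaded = lambertian_shade(albedo, normals, light)
```

Because the diffuse term depends on surface normals (gradients of the density field), an incorrect shape cannot produce realistic images across all sampled lights, which is what resolves the shape-color ambiguity.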
Archived Files and Locations
application/pdf, 3.2 MB — arXiv 2110.15678v1 (arxiv.org repository; web.archive.org webarchive)