NeRD: Neural Reflectance Decomposition from Image Collections
by
Mark Boss, Raphael Braun, Varun Jampani, Jonathan T. Barron, Ce Liu, Hendrik P.A. Lensch
2021
Abstract
Decomposing a scene into its shape, reflectance, and illumination is a
challenging but essential problem in computer vision and graphics. This problem
is inherently more challenging when the illumination is not a single light
source under laboratory conditions but is instead an unconstrained
environmental illumination. Though recent work has shown that implicit
representations can be used to model the radiance field of an object, these
techniques only enable view synthesis and not relighting. Additionally,
evaluating these radiance fields is resource and time-intensive. By decomposing
a scene into explicit representations, any rendering framework can be leveraged
to generate novel views under any illumination in real-time. NeRD is a method
that achieves this decomposition by introducing physically-based rendering to
neural radiance fields. Even challenging non-Lambertian reflectances, complex
geometry, and unknown illumination can be decomposed into high-quality models.
The datasets and code are available on the project page:
https://markboss.me/publication/2021-nerd/
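The abstract's key point is that once a scene is decomposed into explicit shape, reflectance, and illumination, relighting reduces to ordinary physically-based shading that any renderer can evaluate in real time. As an illustration only (not the paper's actual model, which uses a learned BRDF decomposition), here is a minimal sketch of relighting decomposed per-point albedos and normals under a set of directional lights approximating an environment; all function and variable names are hypothetical:

```python
import numpy as np

def shade_lambertian(albedo, normals, light_dirs, light_colors):
    """Relight decomposed geometry under new illumination.

    albedo:       (N, 3) per-point diffuse reflectance
    normals:      (N, 3) unit surface normals
    light_dirs:   (L, 3) unit directions toward L lights that
                  approximate the environment illumination
    light_colors: (L, 3) radiance of each light
    Returns (N, 3) linear RGB.
    """
    # Clamped cosine foreshortening term for each point/light pair
    cos = np.clip(normals @ light_dirs.T, 0.0, None)   # (N, L)
    # Sum incoming radiance weighted by the cosine term
    irradiance = cos @ light_colors                    # (N, 3)
    # Lambertian BRDF is albedo / pi
    return albedo / np.pi * irradiance

# One surface point facing up, lit by a single white light from above
albedo = np.array([[0.8, 0.2, 0.2]])
normals = np.array([[0.0, 0.0, 1.0]])
light_dirs = np.array([[0.0, 0.0, 1.0]])
light_colors = np.array([[1.0, 1.0, 1.0]])
rgb = shade_lambertian(albedo, normals, light_dirs, light_colors)
```

Because the decomposition is explicit, swapping `light_dirs` and `light_colors` for a different environment relights the scene without re-evaluating any neural network per ray, which is what makes real-time rendering possible.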
arXiv:2012.03918v3