DDNeRF: Depth Distribution Neural Radiance Fields

by David Dadon, Ohad Fried, Yacov Hel-Or

Released as an article.

2022  

Abstract

In recent years, the field of implicit neural representation has progressed significantly. Models such as neural radiance fields (NeRF), which use relatively small neural networks, can represent high-quality scenes and achieve state-of-the-art results for novel view synthesis. Training these types of networks, however, is still computationally very expensive. We present depth distribution neural radiance fields (DDNeRF), a new method that significantly increases sampling efficiency along rays during training while achieving superior results for a given sampling budget. DDNeRF accomplishes this by learning a more accurate representation of the density distribution along each ray. More specifically, we train a coarse model to predict the internal distribution of the transparency within an input volume, in addition to the volume's total density. This finer distribution then guides the sampling procedure of the fine model, allowing us to use fewer samples during training and thereby reducing computational cost.
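The record contains only the abstract, so the exact parameterization used in the paper is not available here. Still, the sampling idea it describes can be sketched: instead of the standard NeRF hierarchy, which resamples uniformly inside each coarse section of a ray, the coarse pass also predicts how density is distributed inside each section, and the fine pass samples from that finer distribution. The sketch below is a minimal NumPy illustration under the assumption that the intra-section distribution is a truncated Gaussian; the function name sample_fine_points and all parameter names are hypothetical, and the paper's actual parameterization may differ.

    import numpy as np

    def sample_fine_points(bin_edges, bin_weights, bin_mu, bin_sigma,
                           n_samples, rng):
        """Draw fine samples along one ray.

        bin_edges   (N+1,) section boundaries along the ray
        bin_weights (N,)   total per-section weight from the coarse pass
        bin_mu      (N,)   predicted intra-section mean, in [0, 1]
                           section-local coordinates (assumption)
        bin_sigma   (N,)   predicted intra-section std (assumption)
        """
        # 1. Pick a section for each sample, proportional to its
        #    total coarse weight (as in standard hierarchical sampling).
        probs = bin_weights / bin_weights.sum()
        idx = rng.choice(len(bin_weights), size=n_samples, p=probs)

        # 2. Sample inside each chosen section from the predicted
        #    intra-section distribution; clip to [0, 1] as a crude
        #    truncation, which is fine for a sketch.
        u = rng.normal(bin_mu[idx], bin_sigma[idx])
        u = np.clip(u, 0.0, 1.0)

        # 3. Map section-local coordinates back to ray depths.
        lo, hi = bin_edges[idx], bin_edges[idx + 1]
        return np.sort(lo + u * (hi - lo))

    rng = np.random.default_rng(0)
    edges = np.linspace(2.0, 6.0, 9)                   # 8 coarse sections
    weights = np.array([.01, .02, .05, .40, .35, .10, .05, .02])
    mu = np.full(8, 0.5)                               # toy predictions
    sigma = np.full(8, 0.15)
    fine_ts = sample_fine_points(edges, weights, mu, sigma, 64, rng)
    print(fine_ts[:8])

Compared with the piecewise-constant PDF of vanilla NeRF, a per-section distribution like this concentrates fine samples near the predicted surface within each section, which is consistent with the abstract's claim of higher sampling efficiency for a given budget.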

Archived Files and Locations

application/pdf  11.3 MB
file_mrhnycteovg5devukt4vgpz6je
arxiv.org (repository)
web.archive.org (webarchive)
Type  article
Stage   submitted
Date   2022-03-30
Version   v1
Language   en
arXiv  2203.16626v1
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints).
Catalog Record
Revision: 757ca6b4-a35b-4cd6-a3ca-137478850c17