Mega-NeRF: Scalable Construction of Large-Scale NeRFs for Virtual Fly-Throughs
by
Haithem Turki, Deva Ramanan, Mahadev Satyanarayanan
2022
Abstract
We use neural radiance fields (NeRFs) to build interactive 3D environments
from large-scale visual captures spanning buildings or even multiple city
blocks collected primarily from drones. In contrast to single object scenes (on
which NeRFs are traditionally evaluated), our scale poses multiple challenges
including (1) the need to model thousands of images with varying lighting
conditions, each of which captures only a small subset of the scene, (2)
prohibitively large model capacities that make it infeasible to train on a
single GPU, and (3) significant challenges for fast rendering that would enable
interactive fly-throughs.
To address these challenges, we begin by analyzing visibility statistics for
large-scale scenes, motivating a sparse network structure where parameters are
specialized to different regions of the scene. We introduce a simple geometric
clustering algorithm for data parallelism that partitions training images (or
rather pixels) into different NeRF submodules that can be trained in parallel.
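The geometric clustering described above can be sketched as follows. This is a hypothetical illustration, not the paper's exact partitioning rule: it samples points along each pixel's ray and assigns the pixel to every submodule whose spatial centroid is nearest to at least one sample, so a single pixel can contribute training data to multiple submodules its ray passes through. The function and parameter names (`partition_pixels`, `near`, `far`, `n_samples`) are assumptions for this sketch.

```python
import numpy as np

def partition_pixels(ray_origins, ray_dirs, centroids,
                     near=0.0, far=1.0, n_samples=32):
    """Hypothetical sketch of geometric pixel clustering: assign each
    pixel's ray to every submodule whose centroid is the nearest centroid
    for at least one sample point along the ray."""
    t = np.linspace(near, far, n_samples)                       # sample depths
    # Points along each ray: shape (n_rays, n_samples, 3).
    pts = ray_origins[:, None, :] + t[None, :, None] * ray_dirs[:, None, :]
    # Distance from every sample point to every centroid:
    # shape (n_rays, n_samples, n_cells).
    d = np.linalg.norm(pts[:, :, None, :] - centroids[None, None, :, :],
                       axis=-1)
    nearest = d.argmin(axis=-1)                                 # (n_rays, n_samples)
    n_cells = len(centroids)
    # membership[i, k] is True if ray i ever passes closest to cell k.
    membership = np.zeros((len(ray_origins), n_cells), dtype=bool)
    for k in range(n_cells):
        membership[:, k] = (nearest == k).any(axis=1)
    return membership
```

A ray that traverses several spatial cells is assigned to each of them, which matches the abstract's point that parameters are specialized to regions of the scene while each image (or pixel) only touches a small subset of those regions.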
We evaluate our approach on existing datasets (Quad 6k and UrbanScene3D) as
well as on our own drone footage, improving training speed by 3x and PSNR
by 12%. We also evaluate recent NeRF fast renderers on top of Mega-NeRF and
introduce a novel method that exploits temporal coherence. Our technique
achieves a 40x speedup over conventional NeRF rendering while remaining within
0.8 dB in PSNR quality, exceeding the fidelity of existing fast renderers.
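One way to exploit temporal coherence during a fly-through, sketched below as a hypothetical illustration rather than the paper's actual renderer: a pixel that had a valid depth estimate in the previous frame can be re-rendered with a handful of samples concentrated around that depth, while newly visible pixels fall back to dense stratified sampling. All names and parameters here (`sample_depths`, `window`, `n_coarse`, `n_fine`) are assumptions for this sketch.

```python
import numpy as np

def sample_depths(prev_depth, near, far,
                  n_coarse=64, n_fine=16, window=0.1):
    """Hypothetical sketch of temporal reuse: reuse the previous frame's
    per-pixel depth to narrow the sampling interval; otherwise sample the
    full [near, far] range densely (cold start)."""
    if prev_depth is None or np.isnan(prev_depth):
        # No prior estimate for this pixel: dense stratified samples.
        return np.linspace(near, far, n_coarse)
    # Prior estimate available: few samples in a narrow window around it.
    lo = max(near, prev_depth - window)
    hi = min(far, prev_depth + window)
    return np.linspace(lo, hi, n_fine)
```

Because consecutive frames of a smooth camera path see nearly the same geometry, most pixels take the cheap warm-start path, which is one plausible source of the kind of speedup the abstract reports.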
arXiv: 2112.10703v2