NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections

Jan 07, 2021
We present NeRF-W, a system for 3D reconstruction of landmarks from unconstrained, "in-the-wild" photo collections. Given a set of posed photos, NeRF-W is able to disentangle the shared, underlying 3D geometry from transient objects and photometric variations, producing a consistent, photorealistic scene representation that can be rendered from novel viewpoints.

Ricardo Martin-Brualla*, Noha Radwan*, Mehdi S. M. Sajjadi*, Jonathan T. Barron, Alexey Dosovitskiy, and Daniel Duckworth (* denotes equal contribution)

Paper abstract: We present a learning-based method for synthesizing novel views of complex outdoor scenes using only unstructured collections of in-the-wild photographs. We build on neural radiance fields (NeRF), which uses the weights of a multilayer perceptron to implicitly model the volumetric density and color of a scene. While NeRF works well on images of static subjects captured under controlled settings, it is incapable of modeling many ubiquitous, real-world phenomena in uncontrolled images, such as variable illumination or transient occluders. In this work, we introduce a series of extensions to NeRF to address these issues, thereby allowing for accurate reconstructions from unstructured image collections taken from the internet. We apply our system, which we dub NeRF-W, to internet photo collections of famous landmarks, thereby producing photorealistic, spatially consistent scene representations despite unknown and confounding factors, resulting in significant improvement over the state of the art.
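To make the appearance-conditioning idea described above concrete, the following is a minimal, hypothetical PyTorch sketch rather than the authors' implementation: a NeRF-style MLP whose color head is additionally conditioned on a learned per-image appearance embedding, so illumination can vary from photo to photo while the shared density field (geometry) stays fixed. The class name, layer sizes, and encoding dimensions are illustrative assumptions, and the sketch omits the transient head that the full method uses to explain away occluders.

import torch
import torch.nn as nn

class NeRFWSketch(nn.Module):
    """Illustrative NeRF-W-style field: shared geometry, per-image appearance."""

    def __init__(self, pos_dim=63, dir_dim=27, n_images=1000,
                 appearance_dim=48, hidden=256):
        super().__init__()
        # One learned appearance latent per training photograph.
        self.appearance = nn.Embedding(n_images, appearance_dim)
        # Trunk: positionally encoded 3D point -> feature vector.
        self.trunk = nn.Sequential(
            nn.Linear(pos_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Volumetric density depends only on position (shared across images).
        self.sigma_head = nn.Linear(hidden, 1)
        # Color depends on position features, view direction, and appearance code.
        self.color_head = nn.Sequential(
            nn.Linear(hidden + dir_dim + appearance_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, x_enc, d_enc, image_idx):
        h = self.trunk(x_enc)
        sigma = torch.relu(self.sigma_head(h))          # (batch, 1) density
        emb = self.appearance(image_idx)                # (batch, appearance_dim)
        rgb = self.color_head(torch.cat([h, d_enc, emb], dim=-1))
        return sigma, rgb

# Example: query the field at 1024 sampled points, conditioned on photo #12.
model = NeRFWSketch()
x_enc = torch.randn(1024, 63)                          # encoded 3D positions
d_enc = torch.randn(1024, 27)                          # encoded view directions
idx = torch.full((1024,), 12, dtype=torch.long)
sigma, rgb = model(x_enc, d_enc, idx)

Because density never sees the appearance embedding, all photos must agree on geometry, while each photo is free to choose its own lighting and color balance at render time.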
