Neural-PIL: Neural Pre-Integrated Lighting for Reflectance Decomposition

Decomposing a scene into its shape, reflectance and illumination is a fundamental problem in computer vision and graphics. Neural approaches such as NeRF have achieved remarkable success in view synthesis, but do not explicitly perform decomposition and instead operate exclusively on radiance (the product of reflectance and illumination). Extensions to NeRF, such as NeRD, can perform decomposition but struggle to accurately recover detailed illumination, thereby significantly limiting realism. We propose a novel reflectance decomposition network that can estimate shape, BRDF and per-image illumination given a set of object images captured under varying illumination. Our key technique is a novel illumination integration network called Neural-PIL that replaces a costly illumination integral operation in the rendering with a simple network query. In addition, we also learn deep low-dimensional priors on BRDF and illumination representations using novel smooth manifold auto-encoders. Our decompositions can result in considerably better BRDF and light estimates, enabling more accurate novel view synthesis and relighting compared to prior art. Decompositions produced by our technique are compatible with any conventional 3D game engine or rendering engine and thus can be used for photorealistic real-time view synthesis and relighting. We will release the code upon publication.
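
To make the key idea concrete, here is a minimal sketch of a pre-integrated lighting (PIL) query. In a physically based renderer, shading a point requires integrating incoming illumination against the BRDF lobe over the hemisphere; the abstract describes replacing that costly integral with a single network query conditioned on the reflected direction, the surface roughness, and a latent illumination code. The layer sizes, input layout, and latent dimension below are assumptions for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class NeuralPILSketch(nn.Module):
    """Hypothetical pre-integrated lighting query.

    Instead of Monte Carlo sampling many incoming light directions per
    shading point, a small MLP is queried once. It maps the reflected
    view direction, the BRDF roughness, and a latent code describing
    the per-image illumination to a pre-integrated RGB radiance.
    """

    def __init__(self, illum_dim: int = 128, hidden: int = 256):
        super().__init__()
        # Inputs: reflected direction (3) + roughness (1) + illumination code.
        self.mlp = nn.Sequential(
            nn.Linear(3 + 1 + illum_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 3),  # pre-integrated RGB radiance
        )

    def forward(self, refl_dir, roughness, z_illum):
        # refl_dir:  (N, 3) unit reflection directions
        # roughness: (N, 1) per-point BRDF roughness
        # z_illum:   (N, illum_dim) latent code of the per-image illumination
        x = torch.cat([refl_dir, roughness, z_illum], dim=-1)
        return self.mlp(x)

# One query per shading point replaces the hemisphere integral:
pil = NeuralPILSketch()
refl = torch.nn.functional.normalize(torch.randn(1024, 3), dim=-1)
rough = torch.rand(1024, 1)
z = torch.randn(1024, 128).expand(1024, -1)
prefiltered = pil(refl, rough, z)  # (1024, 3)
```

Because the network is evaluated once per shading point rather than sampling hundreds of light directions, the rendering cost no longer scales with the complexity of the illumination.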

Besides the general NeRF Explosion of 2020, a subfield emerged that introduces explicit material representations into neural volume representations, with papers such as NeRD, NeRV, Neural Reflectance Fields for Appearance Acquisition, PhySG and NeRFactor. How illumination is represented varies drastically between these methods: some focus on single point lights (Neural Reflectance Fields for Appearance Acquisition), some assume the illumination is known (NeRV), some extract it from a trained NeRF as an illumination map (NeRFactor), and others represent it with Spherical Gaussians (NeRD and PhySG). It is also worth pointing out that nearly all of these methods assume a single illumination per scene, with NeRD being the exception.
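
For context on that last representation: a Spherical Gaussian environment models illumination as a small sum of smooth lobes, L(w) = sum_k a_k * exp(lambda_k * (w . xi_k - 1)), where each lobe has an RGB amplitude a_k, a unit axis xi_k, and a sharpness lambda_k. The sketch below evaluates such a representation; the lobe count and parameters are illustrative and not taken from NeRD or PhySG.

```python
import numpy as np

def eval_spherical_gaussians(direction, amplitudes, axes, sharpness):
    """Evaluate L(w) = sum_k a_k * exp(lambda_k * (w . xi_k - 1)).

    direction:  (3,) unit query direction w
    amplitudes: (K, 3) RGB lobe amplitudes a_k
    axes:       (K, 3) unit lobe axes xi_k
    sharpness:  (K,) lobe sharpness values lambda_k
    """
    cos = axes @ direction                     # (K,) dot products w . xi_k
    weights = np.exp(sharpness * (cos - 1.0))  # (K,) lobe falloffs
    return weights @ amplitudes                # (3,) RGB radiance

# Example: a random 24-lobe environment, queried along the up direction.
rng = np.random.default_rng(0)
axes = rng.normal(size=(24, 3))
axes /= np.linalg.norm(axes, axis=1, keepdims=True)
radiance = eval_spherical_gaussians(
    np.array([0.0, 0.0, 1.0]),
    rng.uniform(size=(24, 3)),
    axes,
    rng.uniform(5.0, 50.0, size=24),
)
```

The appeal of this representation is that products and integrals of Spherical Gaussians have closed forms, which is why NeRD and PhySG can shade analytically; its limitation, as the abstract notes, is that a few dozen smooth lobes cannot capture detailed, high-frequency illumination.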
