A Diffusion Approach to Radiance Field Relighting using Multi-Illumination Synthesis

submitted by
Style Pass
2024-11-27 15:30:28

Our method produces relightable radiance fields directly from a single-illumination multi-view dataset, using priors from generative data in place of an actual multi-illumination capture. It is composed of three main parts. First, we create a 2D relighting neural network with direct control of the lighting direction. Second, we use this network to transform a multi-view capture with single lighting into a virtual multi-lighting capture. Finally, we create a relightable radiance field that accounts for inaccuracies in the synthesized relit input images and provides a multi-view-consistent lighting solution.
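The three stages above can be sketched as a simple dataflow. Everything here is a hypothetical stand-in, not the paper's implementation: `relight_2d` replaces the trained 2D relighting network with a trivial Lambertian-style gain, and `fit_relightable_field` replaces radiance-field optimization with a per-light mean image, just to show how one capture fans out into a virtual multi-light dataset.

```python
import numpy as np

def relight_2d(image, light_dir):
    # Stage 1 (stand-in): the real method uses a 2D relighting network
    # conditioned on light direction; here a Lambertian-style gain
    # (dot product with the camera-facing normal) substitutes for it.
    gain = max(0.0, float(np.dot(light_dir, (0.0, 0.0, 1.0))))
    return np.clip(image * gain, 0.0, 1.0)

def synthesize_multi_light(views, light_dirs):
    # Stage 2: expand a single-illumination multi-view capture into a
    # virtual multi-illumination capture by relighting every view
    # under every target light direction.
    return [[relight_2d(v, d) for v in views] for d in light_dirs]

def fit_relightable_field(multi_light_views):
    # Stage 3 (stand-in): the real method fits a relightable radiance
    # field robust to inaccuracies in the synthesized inputs; here a
    # per-light-direction mean image serves as a placeholder "model".
    return [np.mean(views, axis=0) for views in multi_light_views]

# Toy usage: 4 views of an 8x8 grayscale scene, 3 light directions.
views = [np.full((8, 8), 0.5) for _ in range(4)]
light_dirs = [np.array(d, dtype=float)
              for d in [(0, 0, 1), (0.6, 0, 0.8), (0, 0.6, 0.8)]]
field = fit_relightable_field(synthesize_multi_light(views, light_dirs))
```

The toy run yields one placeholder reconstruction per light direction, each with the same spatial resolution as the input views.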

Because it does not rely on accurate geometry and surface normals, our method handles cluttered scenes with complex geometry and reflective BRDFs better than many prior works. We compare to OutCast, Relightable 3D Gaussians, and TensoIR.

Radiance field representations like 3DGS rely on multi-view consistency; breaking it introduces floaters and holes in surfaces.
