Advances in 3D Reconstruction and Relighting

The field of 3D reconstruction and relighting is advancing rapidly, with a focus on methods that reconstruct and render 3D objects and scenes from 2D images. Recent work has explored generative priors, kinematic-aware diffusion models, and inverse image-based rendering to improve the accuracy and fidelity of 3D reconstruction and relighting. These approaches show significant promise in preserving identity, capturing lighting effects, and estimating the kinematic properties of articulated objects. Notable papers in this area include:

3DPR, an image-based relighting model that leverages generative priors to achieve physically accurate environmental relighting.

KineDiff3D, a unified framework for reconstructing diverse articulated instances and estimating their poses from single-view input.

Inverse Image-Based Rendering, a novel-view-synthesis method that generates light fields from single images.

CUPID, a pose-grounded generative 3D reconstruction method that infers the camera pose, 3D shape, and texture of an object from a single 2D image.
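For context on what "physically accurate environmental relighting" involves, a standard building block (illustrative background only, not the method of any paper listed here) is diffuse relighting with a second-order spherical-harmonic (SH) representation of the environment light. The function names and the 9-coefficient layout below are assumptions for this sketch:

```python
import numpy as np

def sh_basis(n):
    """First 9 real spherical-harmonic basis values for a unit normal n."""
    x, y, z = n
    return np.array([
        0.282095,
        0.488603 * y,
        0.488603 * z,
        0.488603 * x,
        1.092548 * x * y,
        1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z,
        0.546274 * (x * x - y * y),
    ])

# Per-band convolution weights of the Lambertian (cosine) kernel.
A = np.array([3.141593,
              2.094395, 2.094395, 2.094395,
              0.785398, 0.785398, 0.785398, 0.785398, 0.785398])

def relight(albedo, normal, sh_light):
    """Diffuse relit color for one surface point.

    albedo:   (3,) RGB reflectance
    normal:   (3,) unit surface normal
    sh_light: (9, 3) SH coefficients of the environment light, per channel
    """
    # Irradiance = sum over coefficients of A_l * L_lm * Y_lm(normal);
    # dividing by pi applies the Lambertian BRDF albedo / pi.
    irradiance = (A[:, None] * sh_light * sh_basis(normal)[:, None]).sum(axis=0)
    return albedo * irradiance / np.pi
```

A uniform white environment of unit radiance has a single DC coefficient of 2*sqrt(pi), under which a point is simply shaded by its albedo; methods like those above must additionally recover the geometry, albedo, and lighting from a single image before such a shading step can be applied.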

Sources

3DPR: Single Image 3D Portrait Relight using Generative Priors

KineDiff3D: Kinematic-Aware Diffusion for Category-Level Articulated Object Shape Reconstruction and Generation

Inverse Image-Based Rendering for Light Field Generation from Single Images

CUPID: Pose-Grounded Generative 3D Reconstruction from a Single Image
