Novel View Synthesis and 3D Scene Representation

The field of computer vision is seeing rapid progress in novel view synthesis and 3D scene representation. Researchers are developing methods to synthesize high-quality novel views from limited input perspectives, tackling challenges such as non-uniform observations and visibility mismatches. Much of the recent work builds on 3D Gaussian Splatting (3DGS), an efficient explicit 3D representation, to improve rendering quality and scalability.

Noteworthy contributions include visibility-uncertainty-guided 3D Gaussian inpainting and renderability field-guided Gaussian splatting, both of which report gains over prior state-of-the-art techniques. Another notable approach, connectivity-enhanced neural point-based graphics, extends novel view synthesis to large-scale autonomous driving scenes while improving rendering quality and scalability.

Beyond synthesis quality, researchers are also investigating security vulnerabilities in 3DGS pipelines and building active 3D reconstruction systems that quantify visual uncertainty to acquire input images efficiently and effectively. GaussTrap presents a systematic study of backdoor threats in 3DGS pipelines, while GauSS-MI introduces a probabilistic model for real-time assessment of visual mutual information from novel viewpoints.

Sources

Visibility-Uncertainty-guided 3D Gaussian Inpainting via Scene Conceptional Learning

Rendering Anywhere You See: Renderability Field-guided Gaussian Splatting

CE-NPBG: Connectivity Enhanced Neural Point-Based Graphics for Novel View Synthesis in Autonomous Driving Scenes

GaussTrap: Stealthy Poisoning Attacks on 3D Gaussian Splatting for Targeted Scene Confusion

GauSS-MI: Gaussian Splatting Shannon Mutual Information for Active 3D Reconstruction
