Advances in 3D Reconstruction and Scene Understanding

The field of 3D reconstruction and scene understanding is advancing rapidly, with an emphasis on more efficient, accurate, and generalizable methods. Recent work builds on neural implicit surfaces, 3D Gaussian splatting (3DGS), and other deep learning-based approaches to improve the quality and robustness of reconstructions, with promising results in challenging settings such as sparse views, low-quality images, and large parallax. Notable papers include SparseRecon, a neural implicit surface reconstruction method for sparse views that enforces feature and depth consistencies, and RobustGS, a general multi-view feature enhancement module that improves the robustness of feedforward 3DGS under low-quality conditions. H3R and Uni3R report state-of-the-art results in generalizable 3D reconstruction and in unified 3D reconstruction and semantic understanding from unposed multi-view images, respectively. Further contributions include PIS3R (very-large-parallax image stitching via deep 3D reconstruction), MuGS (multi-baseline generalizable Gaussian splatting), Surf3R (rapid surface reconstruction from sparse RGB views), and PixCuboid (room layout estimation from multi-view featuremetric alignment).
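As a rough illustration of the rendering model shared by the Gaussian splatting methods above, the sketch below alpha-composites a few depth-sorted splats for a single pixel using the standard front-to-back formula. The colours and opacities are made-up values for illustration only and are not taken from any of the cited papers.

```python
# Minimal sketch of the alpha compositing used when rendering Gaussian splats:
# each depth-sorted splat contributes its colour c_i weighted by its opacity
# alpha_i and by the transmittance of everything in front of it.
import numpy as np

colors = np.array([[1.0, 0.0, 0.0],   # front splat (red)
                   [0.0, 1.0, 0.0],   # middle splat (green)
                   [0.0, 0.0, 1.0]])  # back splat (blue)
alphas = np.array([0.3, 0.5, 0.9])    # per-splat opacity after 2D projection

# Transmittance before splat i: product of (1 - alpha_j) for all j in front.
transmittance = np.concatenate(([1.0], np.cumprod(1.0 - alphas)[:-1]))
pixel = (colors * (alphas * transmittance)[:, None]).sum(axis=0)
print(pixel)  # composited RGB for this pixel
```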

Sources

SparseRecon: Neural Implicit Surface Reconstruction from Sparse Views with Feature and Depth Consistencies

RobustGS: Unified Boosting of Feedforward 3D Gaussian Splatting under Low-Quality Conditions

H3R: Hybrid Multi-view Correspondence for Generalizable 3D Reconstruction

Uni3R: Unified 3D Reconstruction and Semantic Understanding via Generalizable Gaussian Splatting from Unposed Multi-View Images

PIS3R: Very Large Parallax Image Stitching via Deep 3D Reconstruction

MuGS: Multi-Baseline Generalizable Gaussian Splatting Reconstruction

Deep Learning-based Scalable Image-to-3D Facade Parser for Generating Thermal 3D Building Models

Surf3R: Rapid Surface Reconstruction from Sparse RGB Views in Seconds

Pseudo Depth Meets Gaussian: A Feed-forward RGB SLAM Baseline

PixCuboid: Room Layout Estimation from Multi-view Featuremetric Alignment
