Current Trends in 3D Reconstruction and Novel View Synthesis

The field of 3D reconstruction and novel view synthesis is advancing rapidly, driven by techniques such as 3D Gaussian Splatting (3DGS), Neural Radiance Fields (NeRF), and multi-view stereo. These methods enable highly realistic, detailed 3D models to be built from 2D images, with applications in computer vision, robotics, and virtual reality. Recent research focuses on improving the efficiency and accuracy of these techniques, particularly for large-scale scenes and dynamic environments. Deep learning-based approaches have markedly improved the quality and fidelity of reconstructions, and integrating 3D reconstruction with other computer vision tasks, such as object detection and tracking, is yielding more comprehensive and accurate scene understanding systems. Noteworthy papers in this area include the TexGS-VolVis framework for expressive scene editing, the PCR-GS technique for COLMAP-free 3D Gaussian Splatting, and the TimeNeRF approach for generalizable neural radiance fields. Overall, the field continues to evolve quickly, with new techniques and applications emerging continuously.
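Both NeRF-style volume rendering and 3D Gaussian Splatting ultimately blend per-sample colors along a viewing ray with front-to-back alpha compositing. The following is a minimal illustrative sketch of that shared compositing rule (not any specific paper's implementation); the function name and data layout are chosen here for illustration.

```python
# Front-to-back alpha compositing, the blending rule shared by NeRF-style
# volume rendering and 3D Gaussian Splatting rasterization (illustrative sketch).
def composite(colors, alphas):
    """Blend per-sample RGB colors with opacities in [0, 1], where samples
    are ordered front to back along a camera ray.

    Returns the accumulated color and the remaining transmittance
    (the fraction of background light still visible)."""
    out = [0.0, 0.0, 0.0]
    transmittance = 1.0  # light not yet absorbed by earlier samples
    for color, alpha in zip(colors, alphas):
        weight = transmittance * alpha      # contribution of this sample
        out = [o + weight * c for o, c in zip(out, color)]
        transmittance *= (1.0 - alpha)      # attenuate for samples behind
    return out, transmittance

# Example: a half-opaque red sample in front of a half-opaque green one.
color, t = composite([(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)], [0.5, 0.5])
# color -> [0.5, 0.25, 0.0], t -> 0.25
```

NeRF derives the per-sample opacities from a learned density field evaluated along each ray, while 3DGS obtains them from projected, depth-sorted Gaussians; the compositing step itself is the same.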

Sources

Multiresolution local smoothness detection in non-uniformly sampled multivariate signals

TexGS-VolVis: Expressive Scene Editing for Volume Visualization via Textured Gaussian Splatting

Augmented Reality in Cultural Heritage: A Dual-Model Pipeline for 3D Artwork Reconstruction

PCR-GS: COLMAP-Free 3D Gaussian Splatting via Pose Co-Regularizations

TimeNeRF: Building Generalizable Neural Radiance Fields across Time from Few-Shot Input Views

Comparative Analysis of Algorithms for the Fitting of Tessellations to 3D Image Data

Adaptive 3D Gaussian Splatting Video Streaming

Adaptive 3D Gaussian Splatting Video Streaming: Visual Saliency-Aware Tiling and Meta-Learning-Based Bitrate Adaptation

Advances in Feed-Forward 3D Reconstruction and View Synthesis: A Survey

Real-Time Scene Reconstruction using Light Field Probes

Towards Geometric and Textural Consistency 3D Scene Generation via Single Image-guided Model Generation and Layout Optimization

Stereo-GS: Multi-View Stereo Vision Model for Generalizable 3D Gaussian Splatting Reconstruction

GCC: A 3DGS Inference Architecture with Gaussian-Wise and Cross-Stage Conditional Processing

Blended Point Cloud Diffusion for Localized Text-guided Shape Editing

ObjectGS: Object-aware Scene Reconstruction and Scene Understanding via Gaussian Splatting

SurfaceSplat: Connecting Surface Reconstruction and Gaussian Splatting

CylinderPlane: Nested Cylinder Representation for 3D-aware Image Generation

Gaussian Splatting with Discretized SDF for Relightable Assets

Point Cloud Streaming with Latency-Driven Implicit Adaptation using MoQ

DWTGS: Rethinking Frequency Regularization for Sparse-view 3D Gaussian Splatting

Appearance Harmonization via Bilateral Grid Prediction with Transformers for 3DGS

LongSplat: Online Generalizable 3D Gaussian Splatting from Long Sequence Images

VGGT-Long: Chunk it, Loop it, Align it -- Pushing VGGT's Limits on Kilometer-scale Long RGB Sequences

Temporal Smoothness-Aware Rate-Distortion Optimized 4D Gaussian Splatting

RemixFusion: Residual-based Mixed Representation for Large-scale Online RGB-D Reconstruction

High-fidelity 3D Gaussian Inpainting: preserving multi-view consistency and photorealistic details

PS-GS: Gaussian Splatting for Multi-View Photometric Stereo

LONG3R: Long Sequence Streaming 3D Reconstruction

MVG4D: Image Matrix-Based Multi-View and Motion Generation for 4D Content Creation from a Single Image

CRUISE: Cooperative Reconstruction and Editing in V2X Scenarios using Gaussian Splatting

Unposed 3DGS Reconstruction with Probabilistic Procrustes Mapping
