Advances in 3D Reconstruction and Rendering

The field of 3D reconstruction and rendering is advancing rapidly, with an emphasis on efficiency, accuracy, and visual fidelity. Recent work on 3D Gaussian splatting introduces directional consistency-driven adaptive density control and self-adaptive alias-free Gaussian splatting, which substantially reduce the number of primitives required while improving reconstruction fidelity. Generative AI frameworks have also been proposed for rapid 3D heritage reconstruction from street view imagery, reporting large speedups and cost savings over conventional pipelines. Other notable advances include controllable 4D scene generation (Diff4Splat), learnable fractional reaction-diffusion dynamics for under-display ToF imaging, texture-guided Gaussian-mesh joint optimization for multi-view reconstruction, and new 3D voxel representations for reconstruction and rendering.

Noteworthy papers include DC4GS, which reduces the primitive count needed for 3D Gaussian splatting; SAGS, which improves deformable tissue reconstruction in dynamic surgical endoscopy; and Oitijjo-3D, which reconstructs 3D models of heritage structures from street view imagery. Together, these advances stand to benefit fields such as robotics, healthcare, and cultural preservation.
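For context on the adaptive density control that methods such as DC4GS refine, the sketch below illustrates the standard densify-and-prune step used in baseline 3D Gaussian splatting: Gaussians with large accumulated view-space gradients are cloned (if small) or split (if large), and nearly transparent ones are pruned. This is a minimal, hypothetical illustration; the function name, array layout, and thresholds are assumptions for exposition and are not taken from any of the papers listed here.

```python
import numpy as np

def adaptive_density_control(means, scales, opacities, grad_accum,
                             grad_thresh=2e-4, scale_thresh=0.01,
                             opacity_prune=0.005):
    """One schematic densify-and-prune step over N Gaussians.

    means:      (N, 3) Gaussian centers
    scales:     (N, 3) per-axis extents
    opacities:  (N,)   learned opacities
    grad_accum: (N,)   accumulated view-space positional gradient magnitude
    """
    needs_densify = grad_accum > grad_thresh        # under-reconstructed regions
    is_large = scales.max(axis=1) > scale_thresh    # decide clone vs. split
    clone_idx = np.where(needs_densify & ~is_large)[0]
    split_idx = np.where(needs_densify & is_large)[0]

    # Clones: duplicate small Gaussians where more capacity is needed.
    cloned = (means[clone_idx], scales[clone_idx], opacities[clone_idx])

    # Splits: replace each large Gaussian with two smaller, offset copies.
    offsets = np.random.normal(size=(len(split_idx), 3)) * scales[split_idx]
    split_means = np.concatenate([means[split_idx] + offsets,
                                  means[split_idx] - offsets])
    split_scales = np.tile(scales[split_idx] / 1.6, (2, 1))
    split_opac = np.tile(opacities[split_idx], 2)

    # Keep everything except the split parents, then append clones and children.
    keep = np.ones(len(means), dtype=bool)
    keep[split_idx] = False
    means = np.concatenate([means[keep], cloned[0], split_means])
    scales = np.concatenate([scales[keep], cloned[1], split_scales])
    opacities = np.concatenate([opacities[keep], cloned[2], split_opac])

    # Prune nearly transparent Gaussians to bound the primitive count.
    alive = opacities > opacity_prune
    return means[alive], scales[alive], opacities[alive]
```

Directional consistency-driven control, as the DC4GS title suggests, replaces or augments this purely gradient-magnitude heuristic so that fewer primitives are spawned for the same fidelity.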

Sources

DC4GS: Directional Consistency-Driven Adaptive Density Control for 3D Gaussian Splatting

SAGS: Self-Adaptive Alias-Free Gaussian Splatting for Dynamic Surgical Endoscopic Reconstruction

Oitijjo-3D: Generative AI Framework for Rapid 3D Heritage Reconstruction from Street View Imagery

Diff4Splat: Controllable 4D Scene Generation with Latent Dynamic Reconstruction Models

Multi-Mapcher: Loop Closure Detection-Free Heterogeneous LiDAR Multi-Session SLAM Leveraging Outlier-Robust Registration for Autonomous Vehicles

4D Neural Voxel Splatting: Dynamic Scene Rendering with Voxelized Gaussian Splatting

Learnable Fractional Reaction-Diffusion Dynamics for Under-Display ToF Imaging and Beyond

Wonder3D++: Cross-domain Diffusion for High-fidelity 3D Generation from a Single Image

TurboMap: GPU-Accelerated Local Mapping for Visual SLAM

Can Foundation Models Revolutionize Mobile AR Sparse Sensing?

A Novel Grouping-Based Hybrid Color Correction Algorithm for Color Point Clouds

LiteVoxel: Low-memory Intelligent Thresholding for Efficient Voxel Rasterization

Improving Multi-View Reconstruction via Texture-Guided Gaussian-Mesh Joint Optimization

A Linear Fractional Transformation Model and Calibration Method for Light Field Camera

CaRF: Enhancing Multi-View Consistency in Referring 3D Gaussian Splatting Segmentation

Near-Lossless 3D Voxel Representation Free from Iso-surface

FastGS: Training 3D Gaussian Splatting in 100 Seconds

UniSplat: Unified Spatio-Temporal Fusion via 3D Latent Scaffolds for Dynamic Driving Scene Reconstruction
