Advancements in Autonomous Driving and 3D Reconstruction

The field of autonomous driving and 3D reconstruction is advancing rapidly, with a focus on improving the accuracy and efficiency of motion planning, depth reconstruction, and scene rendering. Researchers are combining the strengths of different methods, for example using Radial Basis Function Networks to learn vehicle motion primitives and Gaussian Splatting for depth reconstruction (both ideas are sketched below). Innovative methods are also being proposed for challenging scenarios, such as reconstructing transparent objects and handling sparse-view inputs, while robust pose estimation architectures and selective photometric losses are raising the quality of 3D reconstructions. Together, these advances point toward more precise and robust autonomous driving systems.

Notable papers include LidarPainter, which enables high-fidelity lane shifts in driving scene reconstruction; AD-GS, which introduces a self-supervised framework for high-quality free-viewpoint rendering of driving scenes; and Physically Based Neural LiDAR Resimulation, which is noteworthy for its advanced LiDAR resimulation capabilities.
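As a concrete picture of the first idea, here is a minimal sketch of a Radial Basis Function Network that maps planning features (e.g. boundary conditions plus a time index) to trajectory points, assuming PyTorch. The layer sizes, Gaussian kernel, and input/output layout are illustrative assumptions, not the MP-RBFN architecture from the paper.

```python
import torch
import torch.nn as nn

class RBFN(nn.Module):
    """Toy RBF network: Gaussian kernels followed by a linear readout."""

    def __init__(self, in_dim=4, num_centers=32, out_dim=3):
        super().__init__()
        # Learnable kernel centers and per-center (log) widths.
        self.centers = nn.Parameter(torch.randn(num_centers, in_dim))
        self.log_widths = nn.Parameter(torch.zeros(num_centers))
        # Linear readout from RBF activations to a trajectory sample.
        self.readout = nn.Linear(num_centers, out_dim)

    def forward(self, x):
        # x: (batch, in_dim); dist2: squared distance to each center.
        dist2 = torch.cdist(x, self.centers).pow(2)
        phi = torch.exp(-dist2 * torch.exp(self.log_widths))
        return self.readout(phi)  # (batch, out_dim), e.g. (x, y, heading)

# Fit the network to imitate sampled expert trajectory points
# (random placeholders here; a real setup would use recorded drives).
model = RBFN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inputs = torch.randn(256, 4)   # placeholder planning features
targets = torch.randn(256, 3)  # placeholder (x, y, heading) samples
for _ in range(200):
    loss = nn.functional.mse_loss(model(inputs), targets)
    opt.zero_grad()
    loss.backward()
    opt.step()
```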
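In the same spirit, AD-GS's object-aware B-spline representation can be pictured as letting each dynamic Gaussian's center follow a spline in time. Below is a hedged SciPy sketch of a single Gaussian mean evaluated from a clamped cubic B-spline; the knot layout and control points are placeholders, not the paper's implementation.

```python
import numpy as np
from scipy.interpolate import BSpline

k = 3  # cubic degree
ctrl = np.array([  # per-Gaussian 3D control points (placeholders)
    [0.0,  0.0, 0.0],
    [1.0,  0.5, 0.0],
    [2.0,  0.5, 0.1],
    [3.0,  0.0, 0.1],
    [4.0, -0.5, 0.2],
])
n = len(ctrl)
# Clamped knot vector on [0, 1]: len(knots) = n + k + 1, with the
# endpoints repeated so the curve passes through the end controls.
knots = np.concatenate([np.zeros(k),
                        np.linspace(0.0, 1.0, n - k + 1),
                        np.ones(k)])
center = BSpline(knots, ctrl, k)

t = 0.37          # normalized frame timestamp in [0, 1]
mu_t = center(t)  # the Gaussian's 3D mean at time t
print(mu_t)
```

Evaluating the mean at each frame's timestamp keeps the motion temporally smooth with only a handful of parameters per object, which is the general appeal of a spline parameterization for dynamic driving scenes.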

Sources

MP-RBFN: Learning-based Vehicle Motion Primitives using Radial Basis Function Networks

TRAN-D: 2D Gaussian Splatting-based Sparse-view Transparent Object Depth Reconstruction via Physics Simulation for Scene Update

BRUM: Robust 3D Vehicle Reconstruction from 360 Sparse Images

LidarPainter: One-Step Away From Any Lidar View To Novel Guidance

AD-GS: Object-Aware B-Spline Gaussian Splatting for Self-Supervised Autonomous Driving

Physically Based Neural LiDAR Resimulation
