The field of autonomous driving and 3D reconstruction is advancing rapidly, with a focus on improving the accuracy and efficiency of motion planning, depth reconstruction, and scene rendering. Researchers are combining the strengths of different methods, for example leveraging Radial Basis Function Networks for motion planning and Gaussian Splatting for depth reconstruction. These advances promise more precise and robust autonomous driving systems. Innovative methods are also being proposed for challenging scenarios such as reconstructing transparent objects and handling sparse-view inputs, while robust pose estimation architectures and selective photometric losses are improving the quality of 3D reconstructions. Notable papers include LidarPainter, which enables high-fidelity lane shifts in driving scene reconstruction; AD-GS, which introduces a self-supervised framework for high-quality free-viewpoint rendering of driving scenes; and Physically Based Neural LiDAR Resimulation, noteworthy for its advanced resimulation capabilities.
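To make the Radial Basis Function Network mention concrete, here is a minimal, generic sketch of an RBF regressor smoothing noisy waypoints into a continuous trajectory. This is an illustrative toy example, not the method from any of the papers above; the center placement, Gaussian width, and least-squares fit are all assumptions chosen for simplicity.

```python
import numpy as np

def rbf_features(x, centers, width):
    # Gaussian RBF activations: phi[i, j] = exp(-(x_i - c_j)^2 / (2 * width^2))
    d2 = (x[:, None] - centers[None, :]) ** 2
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbf(x, y, centers, width, reg=1e-6):
    # Regularized least squares for the output-layer weights.
    Phi = rbf_features(x, centers, width)
    A = Phi.T @ Phi + reg * np.eye(len(centers))
    return np.linalg.solve(A, Phi.T @ y)

def predict(x, weights, centers, width):
    return rbf_features(x, centers, width) @ weights

# Toy motion-planning flavor: noisy lateral offsets along a path,
# smoothed into a dense, continuous trajectory.
t = np.linspace(0.0, 1.0, 20)  # normalized arc length of the path
rng = np.random.default_rng(0)
waypoints = np.sin(2.0 * np.pi * t) + 0.05 * rng.normal(size=t.size)

centers = np.linspace(0.0, 1.0, 10)
w = fit_rbf(t, waypoints, centers, width=0.1)
trajectory = predict(np.linspace(0.0, 1.0, 100), w, centers, width=0.1)
```

Because the output is linear in the weights, fitting reduces to a small linear solve, which is one reason RBF networks are attractive for fast trajectory generation.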