Advances in Synthetic Data Generation for Autonomous Driving

Synthetic data generation for autonomous driving is advancing rapidly, with a focus on producing high-quality, realistic, and diverse datasets for training and testing perception models. Recent work introduces frameworks and methodologies that generate dynamic 3D driving scenes, photorealistic simulations, and adversarial attacks, all aimed at improving the performance and robustness of autonomous driving systems. Noteworthy papers in this area include DriveGen3D, which introduces a unified pipeline for generating high-quality dynamic 3D driving scenes, and UNDREAM, which bridges differentiable rendering and photorealistic simulation to enable end-to-end optimization of adversarial perturbations on 3D objects. Other papers, such as Dream4Drive and AutoScape, also make notable contributions to synthetic data generation for autonomous driving.
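To make the adversarial-perturbation idea concrete, here is a minimal sketch of the general attack pattern UNDREAM's title describes: a perturbation on a 3D object's texture is optimized by backpropagating a perception model's loss through a differentiable renderer. Everything here (ToyRenderer, ToyPerception, the budget and step size) is a hypothetical stand-in under stated assumptions, not UNDREAM's actual implementation; a real pipeline would render a textured mesh into a driving scene (e.g. with nvdiffrast or PyTorch3D) and attack an actual detector.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class ToyRenderer(nn.Module):
    """Hypothetical differentiable 'renderer': maps an object texture to an image."""
    def forward(self, texture: torch.Tensor) -> torch.Tensor:
        # Smoothly blend the texture over a fixed background so gradients
        # flow from image pixels back to the texture values.
        background = torch.full_like(texture, 0.5)
        return 0.7 * torch.sigmoid(texture) + 0.3 * background

class ToyPerception(nn.Module):
    """Hypothetical stand-in perception model: classifies the rendered image."""
    def __init__(self) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
        )

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        return self.net(img)

renderer, model = ToyRenderer(), ToyPerception()
texture = torch.rand(1, 3, 64, 64)                      # benign object texture
delta = torch.zeros_like(texture, requires_grad=True)   # adversarial perturbation
label = torch.tensor([0])                               # model's current prediction
eps, step = 0.05, 0.01                                  # L-inf budget and step size (assumed)

loss_fn = nn.CrossEntropyLoss()
for _ in range(50):
    img = renderer(texture + delta)    # differentiable render of the perturbed object
    loss = loss_fn(model(img), label)  # loss we *maximize* to make the model wrong
    loss.backward()
    with torch.no_grad():
        delta += step * delta.grad.sign()  # PGD-style gradient ascent step
        delta.clamp_(-eps, eps)            # project back into the L-inf budget
        delta.grad.zero_()
```

The key design point this sketch illustrates is end-to-end differentiability: because the renderer is differentiable, the perturbation lives in 3D object space (here, a texture) rather than in image pixels, so the optimized attack remains consistent across viewpoints and rendering conditions.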

Sources

DriveGen3D: Boosting Feed-Forward Driving Scene Generation with Efficient Video Diffusion

UNDREAM: Bridging Differentiable Rendering and Photorealistic Simulation for End-to-end Adversarial Attacks

Rethinking Driving World Model as Synthetic Data Generator for Perception Tasks

Advances in 4D Representation: Geometry, Motion, and Interaction

AegisRF: Adversarial Perturbations Guided with Sensitivity for Protecting Intellectual Property of Neural Radiance Fields

VGD: Visual Geometry Gaussian Splatting for Feed-Forward Surround-view Driving Reconstruction

Synthetic Data for Robust Runway Detection

AutoScape: Geometry-Consistent Long-Horizon Scene Generation
