Advancements in Autonomous Driving Perception

The field of autonomous driving is moving toward enhanced perception capabilities, with a focus on cooperative perception, infrastructure-based sensor placement, and improved 3D object detection. Researchers are exploring monocular traffic cameras, heterogeneous multi-modal infrastructure sensors, and self-supervised pre-training methods to improve scene representation and extend perception range. New datasets, such as those for lane detection, end-to-end autonomous parking, and drone-derived traffic analysis, are also driving innovation in the field. Noteworthy papers include Enhanced Cooperative Perception Through Asynchronous Vehicle to Infrastructure Framework, which proposes a V2I framework that uses monocular traffic cameras for 3D object detection, and InSPE: Rapid Evaluation of Heterogeneous Multi-Modal Infrastructure Sensor Placement, which introduces a set of perception surrogate metrics for rapidly assessing perception effectiveness across diverse infrastructure and environmental scenarios.

Sources

Enhanced Cooperative Perception Through Asynchronous Vehicle to Infrastructure Framework with Delay Mitigation for Connected and Automated Vehicles

InSPE: Rapid Evaluation of Heterogeneous Multi-Modal Infrastructure Sensor Placement

CICV5G: A 5G Communication Delay Dataset for PnC in Cloud-based Intelligent Connected Vehicles

Datasets for Lane Detection in Autonomous Driving: A Comprehensive Review

E2E Parking Dataset: An Open Benchmark for End-to-End Autonomous Parking

DRIFT open dataset: A drone-derived intelligence for traffic analysis in urban environment

Collaborative Perception Datasets for Autonomous Driving: A Review

Self-Supervised Pre-training with Combined Datasets for 3D Perception in Autonomous Driving
