Advancements in Computer Vision for Autonomous Systems and Urban Environments

The field of computer vision is advancing rapidly, with a focus on developing robust, accurate models for autonomous systems and urban environments. Recent research emphasizes building comprehensive datasets that capture the complexity of real-world scenarios, such as dense and dynamic urban environments, and integrating multimodal data sources including cameras, LiDAR, and radar. Notable papers in this area include the ODOR dataset, a large-scale collection of object annotations in artworks that challenges researchers to explore the intersection of object recognition and smell perception; the RoundaboutHD dataset, a comprehensive benchmark for multi-camera vehicle tracking in real-world urban environments; the EGC-VMAP framework, which generates accurate city-scale vectorized maps from crowdsourced vehicle data; and the TruckV2X dataset, which addresses the distinct perception challenges of autonomous trucking and provides a foundation for developing cooperative perception systems.
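A common first step in the camera–LiDAR integration mentioned above is projecting 3D LiDAR points into a camera's image plane so that point clouds and image detections can be associated. The sketch below is illustrative only (it is not taken from any of the cited papers), uses a simple pinhole camera model, and the intrinsic parameters `fx`, `fy`, `cx`, `cy` are made-up placeholder values.

```python
# Illustrative sketch: projecting LiDAR points (already transformed into the
# camera frame, z pointing forward) onto the image plane with a pinhole model.
# The intrinsics below are hypothetical values, not from any cited dataset.

def project_points(points, fx=1000.0, fy=1000.0, cx=640.0, cy=360.0):
    """Project 3D points in the camera frame to (u, v) pixel coordinates.

    Points behind the camera (z <= 0) are dropped.
    """
    pixels = []
    for x, y, z in points:
        if z <= 0:
            continue  # behind the image plane; cannot be projected
        u = fx * x / z + cx
        v = fy * y / z + cy
        pixels.append((u, v))
    return pixels

# A point directly ahead of the camera projects to the principal point (cx, cy);
# a point behind the camera is discarded.
print(project_points([(0.0, 0.0, 10.0), (1.0, 0.0, -5.0)]))
# → [(640.0, 360.0)]
```

In a real pipeline the extrinsic LiDAR-to-camera transform would be applied first, and projected points falling outside the image bounds would also be filtered out.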

Sources

Smelly, dense, and spreaded: The Object Detection for Olfactory References (ODOR) dataset

Unreal is all you need: Multimodal ISAC Data Simulation with Only One Engine

RoundaboutHD: High-Resolution Real-World Urban Environment Benchmark for Multi-Camera Vehicle Tracking

End-to-End Generation of City-Scale Vectorized Maps by Crowdsourced Vehicles

Multimodal HD Mapping for Intersections by Intelligent Roadside Units

TruckV2X: A Truck-Centered Perception Dataset

OD-VIRAT: A Large-Scale Benchmark for Object Detection in Realistic Surveillance Environments

Dual LiDAR-Based Traffic Movement Count Estimation at a Signalized Intersection: Deployment, Data Collection, and Preliminary Analysis
