Advances in 3D Perception and Autonomous Driving

The field of 3D perception for autonomous driving is advancing rapidly, with a focus on improving the accuracy and efficiency of 3D object detection, tracking, and scene understanding. Recent work integrates multi-modal features, such as camera and LiDAR data, to enhance the robustness and reliability of perception systems, and there is growing interest in leveraging foundation models and attention mechanisms to boost performance on 3D perception tasks. Noteworthy papers in this area include:

- Bridging Perspectives: Foundation Model Guided BEV Maps for 3D Object Detection and Tracking, which proposes a hybrid detection and tracking framework that combines perspective-view and bird's-eye-view features.
- NV3D: Leveraging Spatial Shape Through Normal Vector-based 3D Object Detection, which derives normal vectors from voxel neighborhoods to capture the relationship between local surfaces and target entities.
- XD-RCDepth: Lightweight Radar-Camera Depth Estimation with Explainability-Aligned and Distribution-Aware Distillation, which presents a lightweight architecture that reduces parameter count while maintaining comparable accuracy.
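The normal-vector idea behind approaches like NV3D can be illustrated with a standard technique: estimating a per-point surface normal as the smallest-eigenvalue direction of the covariance of each point's local neighborhood. The sketch below is a minimal, self-contained illustration of that classic PCA-based estimator, not the paper's actual pipeline; the function name and the brute-force neighbor search are our own simplifications.

```python
import numpy as np

def estimate_normals(points, k=8):
    """Illustrative PCA-based normal estimation for a small point cloud.

    For each point, take its k nearest neighbors and compute the
    neighborhood covariance; the eigenvector with the smallest
    eigenvalue approximates the local surface normal.
    """
    n = len(points)
    # Brute-force pairwise squared distances (fine for a tiny demo cloud).
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    normals = np.empty_like(points)
    for i in range(n):
        nbrs = points[np.argsort(d2[i])[:k]]
        cov = np.cov(nbrs.T)
        # np.linalg.eigh returns eigenvalues in ascending order,
        # so column 0 is the smallest-variance direction.
        _, eigvecs = np.linalg.eigh(cov)
        normals[i] = eigvecs[:, 0]
    return normals

# Points sampled on the plane z = 0: normals should align with the z axis
# (up to sign, since a normal's orientation is ambiguous).
rng = np.random.default_rng(0)
pts = np.c_[rng.uniform(-1, 1, (50, 2)), np.zeros(50)]
normals = estimate_normals(pts)
print(np.allclose(np.abs(normals[:, 2]), 1.0))
```

In a voxel-based detector the same computation would run over points within neighboring voxels rather than k-nearest neighbors, but the geometric principle — local covariance analysis yielding a surface orientation cue — is the same.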

Sources

Multi Camera Connected Vision System with Multi View Analytics: A Comprehensive Survey

Cell Instance Segmentation: The Devil Is in the Boundaries

Bridging Perspectives: Foundation Model Guided BEV Maps for 3D Object Detection and Tracking

DAGLFNet: Deep Attention-Guided Global-Local Feature Fusion for Pseudo-Image Point Cloud Segmentation

RareBoost3D: A Synthetic LiDAR Dataset with Enhanced Rare Classes

Towards Fast and Scalable Normal Integration using Continuous Components

NV3D: Leveraging Spatial Shape Through Normal Vector-based 3D Object Detection

CurriFlow: Curriculum-Guided Depth Fusion with Optical Flow-Based Temporal Alignment for 3D Semantic Scene Completion

Complementary Information Guided Occupancy Prediction via Multi-Level Representation Fusion

Novel Class Discovery for Point Cloud Segmentation via Joint Learning of Causal Representation and Reasoning

XD-RCDepth: Lightweight Radar-Camera Depth Estimation with Explainability-Aligned and Distribution-Aware Distillation

CALM-Net: Curvature-Aware LiDAR Point Cloud-based Multi-Branch Neural Network for Vehicle Re-Identification
