Advancements in Sensor Fusion and Estimation for Autonomous Systems

The field of autonomous systems is seeing significant advances in sensor fusion and estimation, enabling more accurate and robust perception. Researchers are exploring new ways to integrate multiple sensors, such as radar, lidar, and inertial measurement units (IMUs), to improve the accuracy and reliability of ego-motion estimation, object detection, and tracking. Deep learning-based methods play a central role in these improvements, and contrastive learning with multi-modal embeddings is showing promise for understanding complex scenes and activities. Together, these developments are paving the way for more capable autonomous systems. Noteworthy papers include O2Former, which proposes an instance segmentation framework for SAR ship images and reports state-of-the-art performance; DeSPITE, which learns correspondences between LiDAR point clouds, human skeleton poses, IMU data, and text, enabling new human activity understanding tasks; and RaCalNet, which removes the need for dense supervision in metric depth estimation with millimeter-wave radar, reporting superior performance under sparse supervision.
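The cross-modal contrastive idea behind work like DeSPITE can be illustrated with a minimal sketch: a symmetric InfoNCE (CLIP-style) loss that pulls together embeddings of the same sample from two modalities, for example a point-cloud encoder and an IMU encoder, while pushing apart mismatched pairs. The function below is a generic illustration of that technique under assumed embedding shapes; the encoder outputs, dimensions, and temperature are hypothetical placeholders, not the paper's actual architecture or hyperparameters.

```python
# Minimal sketch of a symmetric cross-modal InfoNCE loss (CLIP-style).
# Assumes two hypothetical encoders that each emit a (batch, dim) embedding;
# this illustrates the general technique, not DeSPITE's implementation.
import torch
import torch.nn.functional as F

def cross_modal_info_nce(z_a: torch.Tensor, z_b: torch.Tensor,
                         temperature: float = 0.07) -> torch.Tensor:
    """z_a, z_b: (batch, dim) embeddings of the same samples from two modalities."""
    z_a = F.normalize(z_a, dim=-1)           # unit-norm embeddings
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature     # (batch, batch) similarity matrix
    targets = torch.arange(z_a.size(0), device=z_a.device)  # matched pairs on the diagonal
    # Symmetric loss: modality A -> B and modality B -> A
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

if __name__ == "__main__":
    batch, dim = 32, 256
    pointcloud_emb = torch.randn(batch, dim)  # stand-in for a point-cloud encoder output
    imu_emb = torch.randn(batch, dim)         # stand-in for an IMU sequence encoder output
    loss = cross_modal_info_nce(pointcloud_emb, imu_emb)
    print(f"contrastive loss: {loss.item():.4f}")
```

In a multi-modal setting such as point clouds, skeleton poses, IMU data, and text, the same pairwise loss can in principle be applied to each modality pair to build a shared embedding space; the exact pairing and weighting scheme would follow the specific method being reproduced.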
Sources
Design and Simulation of Vehicle Motion Tracking System using a Youla Controller Output Observation System
DeSPITE: Exploring Contrastive Deep Skeleton-Pointcloud-IMU-Text Embeddings for Advanced Point Cloud Human Activity Understanding