The field of geospatial perception and autonomous navigation is advancing rapidly, with a focus on improving the accuracy and robustness of localization and mapping systems. Recent work has centered on fusing multiple sensor modalities, such as LiDAR, visual, and inertial measurements, to make these systems more reliable and adaptable in complex, dynamic environments. Notable advances include novel fusion techniques, such as the Inferred Attention Fusion (INAF) module, and robust LiDAR-visual-inertial-kinematic odometry systems. There has also been significant progress in point cloud compression and processing, with methods like ProDAT and AnyPcc enabling efficient, scalable compression of 3D point cloud data.
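The INAF module's exact architecture is not detailed here, but the general idea behind attention-based multi-sensor fusion can be sketched as weighting per-modality features by softmax attention before combining them. The sketch below is a minimal NumPy illustration under that assumption; the scoring function and all names are hypothetical stand-ins (a real system would use a learned scorer and feature encoders).

```python
import numpy as np

def attention_fuse(features):
    """Fuse per-modality feature vectors with softmax attention weights.

    features: dict mapping modality name -> 1-D feature vector (same length).
    The per-modality score used here (mean activation) is a toy stand-in
    for a learned attention network.
    """
    names = sorted(features)
    stacked = np.stack([features[n] for n in names])  # (num_modalities, dim)
    scores = stacked.mean(axis=1)                     # hypothetical scorer
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                          # softmax over modalities
    fused = weights @ stacked                         # attention-weighted sum
    return fused, dict(zip(names, weights))

# Example: three modalities producing 4-D features
feats = {
    "lidar":    np.array([0.9, 0.1, 0.4, 0.3]),
    "visual":   np.array([0.2, 0.8, 0.5, 0.1]),
    "inertial": np.array([0.1, 0.1, 0.2, 0.9]),
}
fused, w = attention_fuse(feats)
```

The fused vector stays in the same feature space as the inputs, and the weights sum to one, so a downstream odometry backend can consume it unchanged while degraded modalities (e.g. vision in low light) are down-weighted.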
Noteworthy papers in this area include the Pole-Image descriptor, which uses poles as anchors to generate signatures from the surrounding 3D structure, and the $\nabla$-SDF method, which combines an explicit prior obtained from gradient-augmented octree interpolation with an implicit neural residual for Euclidean signed distance function reconstruction. The ALICE-LRI method, which achieves lossless range image generation from spinning LiDAR point clouds without requiring manufacturer metadata or calibration files, is also a significant contribution.
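The explicit-prior-plus-implicit-residual decomposition behind $\nabla$-SDF can be illustrated in one dimension: a coarse interpolated distance field supplies the prior, and a small learned correction is added on top. The sketch below is a minimal assumption-laden toy, not the paper's method: linear interpolation stands in for gradient-augmented octree interpolation, and a fixed smooth function stands in for the trained residual MLP.

```python
import numpy as np

# Explicit prior: coarse SDF samples on a regular 1-D grid
# (here, true distance to a surface at x = 0.5)
grid_x = np.linspace(0.0, 1.0, 5)
grid_sdf = np.abs(grid_x - 0.5)

def prior(x):
    """Interpolate the coarse grid -- a 1-D stand-in for the paper's
    gradient-augmented octree interpolation."""
    return np.interp(x, grid_x, grid_sdf)

def residual(x):
    """Placeholder for the implicit neural residual: a small smooth
    correction. In practice an MLP is trained so that prior + residual
    matches observed distances."""
    return 0.01 * np.sin(8 * np.pi * x)

def sdf(x):
    """Reconstructed signed distance = explicit prior + implicit residual."""
    return prior(x) + residual(x)
```

The appeal of this split is that the explicit prior carries most of the geometry cheaply and queryably, so the network only has to represent a small, low-frequency correction.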