Progress in Visual Localization and Mapping
The field of visual localization and mapping is advancing rapidly. Researchers are improving the accuracy and robustness of these systems, enabling deployment in applications ranging from autonomous vehicles to spacecraft navigation. Large vision models and graph-based techniques are increasingly used for more efficient and effective feature extraction and matching, and new datasets and benchmarks make it easier to evaluate and compare methods. Noteworthy papers include PanMatch, a versatile foundation model for robust correspondence matching; VISTA, a monocular segmentation-based mapping framework for appearance- and view-invariant global localization; and GeoDistill, a geometry-guided weakly supervised self-distillation framework for cross-view localization.
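As a concrete illustration of the feature extraction and matching step these systems build on, below is a minimal sketch of a classical correspondence pipeline using OpenCV's ORB detector and a brute-force matcher. It is a stand-in for the learned matchers discussed in the papers above, not an implementation of any of them; the image paths, keypoint budget, and ratio-test threshold are illustrative assumptions.

```python
# Minimal sketch: extract keypoints in two views and match them,
# using classical ORB features as a stand-in for learned correspondence
# models such as PanMatch. Paths and parameters are illustrative.
import cv2

def match_features(path_a: str, path_b: str, max_keypoints: int = 2000):
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

    # Detect keypoints and compute binary descriptors in each image.
    orb = cv2.ORB_create(nfeatures=max_keypoints)
    kps_a, desc_a = orb.detectAndCompute(img_a, None)
    kps_b, desc_b = orb.detectAndCompute(img_b, None)

    # Brute-force Hamming matching with Lowe's ratio test to reject
    # ambiguous correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    candidates = matcher.knnMatch(desc_a, desc_b, k=2)
    good = []
    for pair in candidates:
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])

    # Return matched keypoint coordinates, one (point_a, point_b) pair
    # per surviving correspondence.
    return [(kps_a[m.queryIdx].pt, kps_b[m.trainIdx].pt) for m in good]

if __name__ == "__main__":
    matches = match_features("view_a.png", "view_b.png")
    print(f"{len(matches)} putative correspondences")
```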
Sources
360-Degree Full-view Image Segmentation by Spherical Convolution compatible with Large-scale Planar Pre-trained Models
FPC-Net: Revisiting SuperPoint with Descriptor-Free Keypoint Detection via Feature Pyramids and Consistency-Based Implicit Matching
A New Dataset and Performance Benchmark for Real-time Spacecraft Segmentation in Onboard Flight Computers
Comparison of Localization Algorithms between Reduced-Scale and Real-Sized Vehicles Using Visual and Inertial Sensors
CorrMoE: Mixture of Experts with De-stylization Learning for Cross-Scene and Cross-Domain Correspondence Pruning