The field of event-based vision is advancing rapidly, with a focus on improving the efficiency and accuracy of visual place recognition, facial keypoint alignment, and object detection. Researchers are exploring new ways to exploit the advantages of event cameras, such as high temporal resolution and robustness to varying illumination. Notable developments include the use of sub-millisecond slices of event data for visual place recognition, cross-modal fusion attention for facial keypoint alignment, and predictive representations of events for downstream tasks. These innovations could enable accurate and efficient navigation, tracking, and detection in applications ranging from autonomous vehicles to drones.

Noteworthy papers include:

- Prepare for Warp Speed, which demonstrates sub-millisecond visual place recognition using event cameras.
- Event-based Facial Keypoint Alignment via Cross-Modal Fusion Attention and Self-Supervised Multi-Event Representation Learning, which proposes a novel framework for event-based facial keypoint alignment.
- Fast Feature Field, which develops a mathematical argument and algorithms for building representations of event-camera data.
- Enabling High-Frequency Cross-Modality Visual Positioning Service for Accurate Drone Landing, which redesigns a drone-oriented visual positioning service around the event camera for accurate drone landing.
- Adaptive Event Stream Slicing for Open-Vocabulary Event-Based Object Detection via Vision-Language Knowledge Distillation, which proposes an event-image knowledge distillation framework for open-vocabulary object detection on event data.
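
At its core, the event-slicing idea mentioned above amounts to grouping a time-ordered stream of per-pixel events into fixed-duration temporal windows. The minimal sketch below illustrates this with a hypothetical `slice_events` helper and microsecond timestamps (a common convention for event cameras); it is not the pipeline of any paper cited here.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    t: int         # timestamp in microseconds
    x: int         # pixel column
    y: int         # pixel row
    polarity: int  # +1 (brightness increase) or -1 (decrease)

def slice_events(events: List[Event], window_us: int = 500) -> List[List[Event]]:
    """Group a time-ordered event stream into fixed-duration slices.

    window_us=500 yields 0.5 ms slices, i.e. the sub-millisecond
    regime described above. Illustrative helper, not a paper's API.
    """
    if not events:
        return []
    slices: List[List[Event]] = []
    t0 = events[0].t            # start of the current window
    current: List[Event] = []
    for ev in events:
        # Advance the window (emitting finished slices, possibly
        # empty ones) until this event falls inside it.
        while ev.t - t0 >= window_us:
            slices.append(current)
            current = []
            t0 += window_us
        current.append(ev)
    slices.append(current)      # flush the final window
    return slices
```

Downstream methods then turn each slice into a dense representation (an image-like frame or learned feature field) for recognition or detection; the slice duration trades temporal resolution against the number of events per slice.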