Advancements in Robotic Perception and Navigation

The field of robotic perception and navigation is advancing on several complementary fronts. Much recent work targets the accuracy and robustness of visual SLAM in dynamic environments, where moving objects violate the static-scene assumption of classical pipelines; semantic segmentation is a common way to identify and exclude such objects before tracking and mapping. Notable progress has also been made in tactile sensing, including whisker-based tactile flight systems for tiny drones, and in ultra-wideband (UWB) synthetic aperture radar (SAR) imaging for mobile robot mapping, which has shown promising results in adverse environmental conditions where optical sensors degrade. Deep learning methods, vision transformers in particular, continue to be applied across visual odometry, place recognition, and object detection. Together, these developments point toward more robust, efficient, and adaptable robotic systems.

Noteworthy papers include RSV-SLAM, which introduces a real-time semantic RGBD SLAM approach for indoor dynamic environments, and Novel UWB Synthetic Aperture Radar Imaging, which proposes a pipeline for mobile robots to incorporate UWB radar-based SAR imaging for high-resolution environmental mapping.
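
To make the dynamic-SLAM idea concrete, here is a minimal Python sketch of semantic masking in a SLAM front end, in the spirit of systems like RSV-SLAM but not reproducing that paper's implementation. The function segment_dynamic is a hypothetical stand-in for whatever segmentation network flags movable classes; everything else uses standard OpenCV calls.

```python
# Minimal sketch: keep SLAM features off dynamic objects by masking
# them out before detection. Illustrative only, not RSV-SLAM itself.
import cv2
import numpy as np

def segment_dynamic(bgr: np.ndarray) -> np.ndarray:
    """Hypothetical placeholder: return a boolean mask that is True on
    pixels belonging to dynamic classes (people, vehicles, ...).
    A real system would run a semantic segmentation model here."""
    return np.zeros(bgr.shape[:2], dtype=bool)

def extract_static_features(bgr: np.ndarray):
    """Detect ORB features only on pixels judged static, so the SLAM
    front end never tracks points on moving objects."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    dynamic = segment_dynamic(bgr)
    # OpenCV detection masks are uint8; nonzero pixels are searchable.
    static_mask = np.where(dynamic, 0, 255).astype(np.uint8)
    orb = cv2.ORB_create(nfeatures=1000)
    keypoints, descriptors = orb.detectAndCompute(gray, static_mask)
    return keypoints, descriptors
```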
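
For the radar mapping theme, the core of most SAR imaging pipelines is back-projection over the sensor trajectory. The sketch below shows a simplified, incoherent (magnitude-only) variant, assuming known antenna poses and precomputed range profiles; the function names are illustrative, and the phase compensation and motion correction that a real UWB pipeline needs are deliberately omitted.

```python
# Simplified incoherent back-projection for SAR-style imaging from a
# moving robot. A sketch under stated assumptions, not the cited
# paper's pipeline.
import numpy as np

def backproject(profiles, antenna_xy, image_xy, range_bins):
    """profiles:   (n_poses, n_bins) magnitude of each range profile
       antenna_xy: (n_poses, 2) antenna position at each measurement
       image_xy:   (n_pixels, 2) world coordinates of output pixels
       range_bins: (n_bins,) range in meters of each profile bin
       returns:    (n_pixels,) accumulated reflectivity estimate."""
    image = np.zeros(len(image_xy))
    for prof, pos in zip(profiles, antenna_xy):
        # Distance from this antenna pose to every output pixel.
        r = np.linalg.norm(image_xy - pos, axis=1)
        # Sample the profile at each pixel's range and accumulate.
        image += np.interp(r, range_bins, prof)
    return image
```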
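
Finally, a toy example of how a vision transformer can be wired for visual odometry: patch-embed two stacked frames, encode the tokens with self-attention, and regress a 6-DoF relative pose. The architecture (TinyTransformerVO) and its hyperparameters are hypothetical illustrations, not the model from "Visual Odometry with Transformers".

```python
# Toy transformer for frame-to-frame pose regression (PyTorch).
import torch
import torch.nn as nn

class TinyTransformerVO(nn.Module):
    """Regress relative 6-DoF motion from two stacked RGB frames.
    Illustrative architecture only."""
    def __init__(self, patch=16, dim=128, depth=4, heads=4):
        super().__init__()
        # Two RGB frames stacked channel-wise -> 6 input channels.
        self.embed = nn.Conv2d(6, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, 6)  # translation (3) + rotation (3)

    def forward(self, frame_t, frame_t1):
        x = torch.cat([frame_t, frame_t1], dim=1)          # (B, 6, H, W)
        tokens = self.embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        tokens = self.encoder(tokens)
        return self.head(tokens.mean(dim=1))               # (B, 6) pose
```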

Sources

RSV-SLAM: Toward Real-Time Semantic Visual SLAM in Indoor Dynamic Environments

Novel UWB Synthetic Aperture Radar Imaging for Mobile Robot Mapping

Whisker-based Tactile Flight for Tiny Drones

Convolutional Neural Nets vs Vision Transformers: A SpaceNet Case Study with Balanced vs Imbalanced Regimes

Visual Odometry with Transformers

Real-Time Threaded Houbara Detection and Segmentation for Wildlife Conservation using Mobile Platforms

Robust Visual Embodiment: How Robots Discover Their Bodies in Real Environments

The Overlooked Value of Test-time Reference Sets in Visual Place Recognition

TCB-VIO: Tightly-Coupled Focal-Plane Binary-Enhanced Visual Inertial Odometry

From Filters to VLMs: Benchmarking Defogging Methods through Object Detection and Segmentation Performance

A Recursive Pyramidal Algorithm for Solving the Image Registration Problem

Flexible and Efficient Spatio-Temporal Transformer for Sequential Visual Place Recognition

OKVIS2-X: Open Keyframe-based Visual-Inertial SLAM Configurable with Dense Depth or LiDAR, and GNSS

Bio-Inspired Robotic Houbara: From Development to Field Deployment for Behavioral Studies

Benchmark on Monocular Metric Depth Estimation in Wildlife Setting

A Comparative Study of Vision Transformers and CNNs for Few-Shot Rigid Transformation and Fundamental Matrix Estimation

CLEAR-IR: Clarity-Enhanced Active Reconstruction of Infrared Imagery

RGBD Gaze Tracking Using Transformer for Feature Fusion

Road Surface Condition Detection with Machine Learning using New York State Department of Transportation Camera Images and Weather Forecast Data

Real-Time Glass Detection and Reprojection using Sensor Fusion Onboard Aerial Robots
