Advances in Human-Robot Interaction and Autonomous Navigation

The field of robotics is advancing rapidly, with a strong focus on improving human-robot interaction and autonomous navigation. Recent work has produced more sophisticated and adaptable robots that can learn from their environments and interact with humans more naturally. One key research direction is the development of frameworks for classifying robot morphology, enabling a more precise and nuanced understanding of how robot design shapes human-robot interaction. In parallel, autonomous navigation has seen significant advances, including hybrid approaches that combine model-based planning with deep learning; such methods promise to let robots navigate complex environments with greater ease and accuracy.

Noteworthy papers in this area include MetaMorph, which presents a comprehensive framework for classifying robot morphology, and PixelNav, which proposes a novel hybrid approach for vision-only navigation. Other notable papers include GEAR, which introduces a gaze-enabled system for human-robot collaboration, and Humanoid Occupancy, which presents a generalized multimodal occupancy perception system for humanoid robots.
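To make the hybrid planning pattern concrete, here is a minimal sketch in Python: a stand-in "learned" module proposes coarse waypoints, and a classical A* planner verifies each leg on an occupancy grid, discarding infeasible proposals. This is a toy illustration of the general idea only, not the method of PixelNav or any other paper listed below; the function names, the grid-world setup, and the straight-line waypoint heuristic are all assumptions made for the example.

```python
import heapq
from typing import List, Optional, Tuple

Cell = Tuple[int, int]  # (row, col) on a 2D occupancy grid


def learned_waypoint_proposal(start: Cell, goal: Cell, steps: int = 3) -> List[Cell]:
    """Stand-in for a learned policy: propose coarse waypoints along the
    straight line to the goal. A real system would run a trained network
    on sensor input; this placeholder just interpolates."""
    return [
        (start[0] + (goal[0] - start[0]) * i // steps,
         start[1] + (goal[1] - start[1]) * i // steps)
        for i in range(1, steps + 1)
    ]


def astar(grid: List[List[int]], start: Cell, goal: Cell) -> Optional[List[Cell]]:
    """Classical model-based planner: A* with a Manhattan heuristic.
    Cells with value 0 are free, 1 are obstacles. Returns None if no path."""
    rows, cols = len(grid), len(grid[0])
    frontier: List[Tuple[int, Cell]] = [(0, start)]
    came_from: dict = {start: None}
    cost = {start: 0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            path, node = [], cur
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0):
                new_cost = cost[cur] + 1
                if nxt not in cost or new_cost < cost[nxt]:
                    cost[nxt] = new_cost
                    h = abs(goal[0] - nxt[0]) + abs(goal[1] - nxt[1])
                    heapq.heappush(frontier, (new_cost + h, nxt))
                    came_from[nxt] = cur
    return None


def hybrid_navigate(grid: List[List[int]], start: Cell, goal: Cell) -> List[Cell]:
    """Hybrid loop: the learned module proposes waypoints; the model-based
    planner validates each leg and rejects proposals it cannot reach."""
    path, cur = [start], start
    for wp in learned_waypoint_proposal(start, goal):
        leg = astar(grid, cur, wp)
        if leg is None:
            continue  # proposal infeasible (e.g., inside an obstacle): skip it
        path.extend(leg[1:])
        cur = wp
    if cur != goal:  # finish the route even if late waypoints were rejected
        leg = astar(grid, cur, goal)
        if leg:
            path.extend(leg[1:])
    return path


if __name__ == "__main__":
    grid = [[0] * 5 for _ in range(5)]
    grid[2][1] = grid[2][2] = grid[2][3] = 1  # a wall across row 2
    print(hybrid_navigate(grid, (0, 0), (4, 4)))
```

In this toy run, the second proposed waypoint lands inside the wall, so the planner rejects it and routes around the obstacle instead. Real systems divide the labor in far more sophisticated ways, but the pattern of a learned proposer checked by a model-based verifier is the same.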

Sources

MetaMorph: A Metamodelling Approach For Robot Morphology

GEAR: Gaze-Enabled Human-Robot Collaborative Assembly

Bot Appétit! Exploring how Robot Morphology Shapes Perceived Affordances via a Mise en Place Scenario in a VR Kitchen

Humanoid Occupancy: Enabling A Generalized Multimodal Occupancy Perception System on Humanoid Robots

PixelNav: Towards Model-based Vision-Only Navigation with Topological Graphs

Autonomous Exploration with Terrestrial-Aerial Bimodal Vehicles

Decision Transformer-Based Drone Trajectory Planning with Dynamic Safety-Efficiency Trade-Offs

LITE: A Learning-Integrated Topological Explorer for Multi-Floor Indoor Environments

Model Predictive Adversarial Imitation Learning for Planning from Observation

From Seeing to Experiencing: Scaling Navigation Foundation Models with Reinforcement Learning

A Two-Stage Lightweight Framework for Efficient Land-Air Bimodal Robot Autonomous Navigation

Recognizing Actions from Robotic View for Natural Human-Robot Interaction
