Human-Robot Collaboration Advances

The field of human-robot collaboration is moving toward safer, more adaptive interaction between humans and robots. Recent work focuses on enabling robots to learn from human behavior and adjust their strategies to cooperate effectively, with advances in trajectory sampling, social navigation, and task-efficient reinforcement learning. Noteworthy papers include Adap-RPF, which proposes an adaptive trajectory sampling method for robot person following in dynamic, crowded environments, and Learning Social Navigation from Positive and Negative Demonstrations and Rule-Based Specifications, which develops a framework for learning a density-based reward from demonstrations and rule-based objectives. Other notable works include A Task-Efficient Reinforcement Learning Task-Motion Planner for Safe Human-Robot Cooperation; Maximal Adaptation, Minimal Guidance: Permissive Reactive Robot Task Planning with Humans in the Loop; and Learning Human-Humanoid Coordination for Collaborative Object Carrying, each contributing to more efficient and safer human-robot collaboration systems.
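To make the density-based reward idea concrete, the sketch below combines kernel density estimates over positive and negative demonstrations with a rule-based penalty term. This is a minimal illustration of the general concept, not the method from the paper: all function names, weights, and the distance-based "keep away from the human" rule are hypothetical assumptions.

```python
import numpy as np

def gaussian_kde_score(points, query, bandwidth=0.5):
    """Average Gaussian kernel density of `query` under demonstration `points`."""
    diffs = points - query                       # (N, d) offsets to each demo state
    sq = np.sum(diffs ** 2, axis=1)              # squared Euclidean distances
    return np.mean(np.exp(-sq / (2 * bandwidth ** 2)))

def reward(state, pos_demos, neg_demos, rule_penalty,
           w_pos=1.0, w_neg=1.0, w_rule=1.0):
    """Illustrative reward: high density under positive demonstrations,
    low density under negative ones, minus a rule-based penalty."""
    r_pos = gaussian_kde_score(pos_demos, state)
    r_neg = gaussian_kde_score(neg_demos, state)
    return w_pos * r_pos - w_neg * r_neg - w_rule * rule_penalty(state)

# Toy usage: 2-D states, with a rule penalizing states within 0.5 m of the origin
# (standing in for a human's position).
pos = np.array([[1.0, 1.0], [1.2, 0.9], [0.8, 1.1]])    # desirable behavior
neg = np.array([[-1.0, -1.0], [-0.9, -1.2]])            # undesirable behavior
penalty = lambda s: max(0.0, 0.5 - np.linalg.norm(s))   # rule-based specification

r_good = reward(np.array([1.0, 1.0]), pos, neg, penalty)
r_bad = reward(np.array([-1.0, -1.0]), pos, neg, penalty)
assert r_good > r_bad  # states near positive demos score higher
```

In practice such a reward would be learned (e.g., with a neural density model) and fed to a navigation policy; the point here is only how demonstration densities and rule-based terms can be combined into a single scalar signal.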

Sources

Adap-RPF: Adaptive Trajectory Sampling for Robot Person Following in Dynamic Crowded Environments

Learning Social Navigation from Positive and Negative Demonstrations and Rule-Based Specifications

A Task-Efficient Reinforcement Learning Task-Motion Planner for Safe Human-Robot Cooperation

Maximal Adaptation, Minimal Guidance: Permissive Reactive Robot Task Planning with Humans in the Loop

Learning Human-Humanoid Coordination for Collaborative Object Carrying
