Advancements in Humanoid Robotics and Reinforcement Learning

The field of humanoid robotics and reinforcement learning is making significant progress, with a focus on more efficient and effective methods for learning complex tasks. One key direction is the integration of unsupervised learning with reinforcement learning, enabling more flexible and adaptable robotic systems. Another is the development of novel policy optimization algorithms that combine the strengths of evolutionary computation and policy gradient methods. Researchers are also exploring new approaches to bridging the gap between human and robot embodiment, so that robots can learn from human demonstrations and adapt to new tasks. Finally, there is growing interest in the design and optimization of robotic hands and manipulators, with an emphasis on compactness, affordability, and dexterity.

Noteworthy papers in this area include:

Zero-Shot Whole-Body Humanoid Control via Behavioral Foundation Models, which introduces a novel algorithm for pre-training agents that can solve a wide range of downstream tasks.

Next-Future: Sample-Efficient Policy Learning for Robotic-Arm Tasks, which proposes a new replay strategy that improves sample efficiency and accuracy when learning multi-goal Markov decision processes.

Crossing the Human-Robot Embodiment Gap with Sim-to-Real RL using One Human Demonstration, which presents a framework for training dexterous manipulation policies from a single RGB-D video of a human demonstrating a task.
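Replay strategies for multi-goal Markov decision processes typically build on hindsight-style goal relabeling, where transitions from a failed episode are reused by substituting a goal the agent actually achieved. The sketch below shows generic "final-goal" hindsight relabeling, not the specific Next-Future strategy; the `Transition` structure and sparse reward convention (0 on success, -1 otherwise) are illustrative assumptions.

```python
from dataclasses import dataclass, replace as dc_replace
from typing import List

@dataclass(frozen=True)
class Transition:
    state: float
    action: float
    goal: float       # goal the agent was originally asked to reach
    achieved: float   # goal state actually reached after the action
    reward: float     # sparse reward: 0.0 on success, -1.0 otherwise

def relabel_final(trajectory: List[Transition]) -> List[Transition]:
    """Hindsight relabeling with the 'final' strategy: pretend the goal
    achieved at the end of the episode was the intended goal all along,
    and recompute the sparse rewards under that substituted goal."""
    new_goal = trajectory[-1].achieved
    return [
        dc_replace(t, goal=new_goal,
                   reward=0.0 if t.achieved == new_goal else -1.0)
        for t in trajectory
    ]

# A two-step episode that failed to reach its original goal of 5.0:
traj = [
    Transition(state=0.0, action=1.0, goal=5.0, achieved=1.0, reward=-1.0),
    Transition(state=1.0, action=1.0, goal=5.0, achieved=2.0, reward=-1.0),
]
relabeled = relabel_final(traj)
# Under the substituted goal 2.0, the final transition now counts as a
# success, turning a zero-reward episode into a useful training signal.
```

Because every relabeled transition succeeds with respect to some goal, the replay buffer provides dense learning signal even when the original task goals are rarely reached.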

Sources

Zero-Shot Whole-Body Humanoid Control via Behavioral Foundation Models

Next-Future: Sample-Efficient Policy Learning for Robotic-Arm Tasks

Evolutionary Policy Optimization

Crossing the Human-Robot Embodiment Gap with Sim-to-Real RL using One Human Demonstration

B*: Efficient and Optimal Base Placement for Fixed-Base Manipulators

RUKA: Rethinking the Design of Humanoid Hands with Learning
