Advances in Humanoid Robotics and Control

The field of humanoid robotics is advancing rapidly, with a focus on more robust, versatile, and autonomous systems. Recent research has explored novel planning and control methods, such as Latent Space Backward Planning and Coupled Hierarchical Diffusion, to improve the efficiency and accuracy of robotic planning over long horizons. Interleaving reinforcement learning with imitation learning has also yielded more effective policy fine-tuning, as in IN-RIL. Noteworthy papers in this area include Zippy, the smallest power-autonomous bipedal robot, and FALCON, a dual-agent reinforcement-learning framework for robust force-adaptive humanoid loco-manipulation. Together, these developments are paving the way for more capable humanoid robots that can perform complex tasks in varied environments.
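To make the interleaving idea concrete, the sketch below alternates supervised imitation updates with REINFORCE-style reinforcement updates on a toy one-dimensional control task. The task, the losses, and the two-to-one interleaving ratio are illustrative assumptions for exposition only, not the actual IN-RIL algorithm.

```python
import numpy as np

# Toy task: state s, expert action a* = 2*s, reward = -(a - 2*s)^2.
# Linear policy a = w*s. Fine-tuning interleaves imitation-learning (IL)
# steps on expert data with reinforcement-learning (RL) steps on reward.
# All specifics here (task, ratio, losses) are illustrative assumptions.

rng = np.random.default_rng(0)
w = 0.0        # policy parameter, starts far from the expert gain of 2
lr = 0.05      # shared learning rate
sigma = 0.1    # exploration noise for the RL steps

def il_step(w, s, a_expert):
    # Supervised gradient descent on the squared error (w*s - a*)^2.
    grad = 2.0 * (w * s - a_expert) * s
    return w - lr * grad

def rl_step(w, s, rng):
    # REINFORCE with a Gaussian policy a ~ N(w*s, sigma^2).
    a = w * s + sigma * rng.normal()
    reward = -(a - 2.0 * s) ** 2
    grad_logp = (a - w * s) / sigma**2 * s  # d/dw log N(a; w*s, sigma^2)
    return w + lr * reward * grad_logp      # gradient ascent on reward

for step in range(500):
    s = rng.uniform(0.5, 1.5)
    if step % 3 < 2:                 # interleave: two IL steps per RL step
        w = il_step(w, s, 2.0 * s)
    else:
        w = rl_step(w, s, rng)

print(round(w, 2))  # converges near the expert gain of 2
```

The imitation steps give a low-variance pull toward the demonstrated behavior, while the reinforcement steps optimize the task reward directly; interleaving them is one simple way to get the stability of the former with the reward-awareness of the latter.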

Sources

Zippy: The smallest power-autonomous bipedal robot

Demystifying Diffusion Policies: Action Memorization and Simple Lookup Table Alternatives

Let Humanoids Hike! Integrative Skill Development on Complex Trails

JAEGER: Dual-Level Humanoid Whole-Body Controller

FALCON: Learning Force-Adaptive Humanoid Loco-Manipulation

Efficient Robotic Policy Learning via Latent Space Backward Planning

FACET: Force-Adaptive Control via Impedance Reference Tracking for Legged Robots

CHD: Coupled Hierarchical Diffusion for Long-Horizon Tasks

HuB: Learning Extreme Humanoid Balance

Motion Control of High-Dimensional Musculoskeletal Systems with Hierarchical Model-Based Planning

RT-cache: Efficient Robot Trajectory Retrieval System

Latent Theory of Mind: A Decentralized Diffusion Architecture for Cooperative Manipulation

Train a Multi-Task Diffusion Policy on RLBench-18 in One Day with One GPU

Learning Long-Context Diffusion Policies via Past-Token Prediction

IN-RIL: Interleaved Reinforcement and Imitation Learning for Policy Fine-Tuning
