Dexterous Manipulation and Robot Learning

The field of robotic manipulation is moving toward more dexterous and adaptive control policies, driven by large-scale demonstration data and new learning frameworks. Recent work focuses on bridging the gap between human demonstrations and robot capabilities, enabling robots to learn from imperfect data and to generalize to new tasks and environments. Notable advances include the use of reinforcement learning, diffusion models, and modular software frameworks to improve policy learning, safety, and sim-to-real transfer.
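
To make the diffusion-model trend concrete, the sketch below shows a generic DDPM-style diffusion policy: a network predicts the noise in a candidate action sequence conditioned on an observation, and reverse diffusion iteratively denoises random noise into a short action plan. This is a minimal illustration only; the module names, dimensions, and MLP backbone are assumptions for the example and are not taken from any of the papers discussed here.

```python
# Minimal, illustrative diffusion-policy sketch (not from the cited papers).
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, HORIZON, STEPS = 32, 7, 16, 50  # hypothetical sizes

class NoisePredictor(nn.Module):
    """Predicts the noise added to an action sequence, conditioned on
    the observation and the (normalized) diffusion timestep."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM + HORIZON * ACT_DIM + 1, 256),
            nn.ReLU(),
            nn.Linear(256, HORIZON * ACT_DIM),
        )

    def forward(self, obs, noisy_actions, t):
        x = torch.cat([obs, noisy_actions.flatten(1), t], dim=-1)
        return self.net(x).view(-1, HORIZON, ACT_DIM)

@torch.no_grad()
def sample_actions(model, obs):
    """Reverse diffusion: start from Gaussian noise and iteratively
    denoise it into an action sequence (simplified DDPM update)."""
    betas = torch.linspace(1e-4, 0.02, STEPS)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    actions = torch.randn(obs.shape[0], HORIZON, ACT_DIM)
    for step in reversed(range(STEPS)):
        t = torch.full((obs.shape[0], 1), step / STEPS)
        eps = model(obs, actions, t)
        # Standard DDPM posterior mean; noise is re-added on all but the last step.
        coef = betas[step] / torch.sqrt(1.0 - alpha_bars[step])
        mean = (actions - coef * eps) / torch.sqrt(alphas[step])
        noise = torch.randn_like(actions) if step > 0 else 0.0
        actions = mean + torch.sqrt(betas[step]) * noise
    return actions

model = NoisePredictor()
obs = torch.randn(1, OBS_DIM)      # stand-in for a robot observation
plan = sample_actions(model, obs)  # (1, HORIZON, ACT_DIM) action sequence
```

In practice such a policy is trained by adding noise to demonstration action sequences and regressing the noise-prediction error, which is one reason diffusion policies pair naturally with large demonstration datasets.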

Noteworthy papers include: Dexplore, which introduces a unified single-loop optimization for learning robot control policies from hand-object motion-capture data; MimicDroid, which enables humanoid robots to perform in-context learning from human play videos, achieving nearly twofold higher real-world success rates; and DreamControl, which combines diffusion models and reinforcement learning to learn autonomous whole-body humanoid skills, promoting natural-looking motions and aiding sim-to-real transfer.

Sources

Dexplore: Scalable Neural Control for Dexterous Manipulation from Reference-Scoped Exploration

MimicDroid: In-Context Learning for Humanoid Robot Manipulation from Human Play Videos

Self-Augmented Robot Trajectory: Efficient Imitation Learning via Safe Self-augmentation with Demonstrator-annotated Precision

Force-Modulated Visual Policy for Robot-Assisted Dressing with Arm Motions

LeVR: A Modular VR Teleoperation Framework for Imitation Learning in Dexterous Manipulation

DreamControl: Human-Inspired Whole-Body Humanoid Control for Scene Interaction via Guided Diffusion
