Dexterous Grasping and Hand Manipulation

The field of dexterous grasping and hand manipulation is moving toward more generalizable and adaptable solutions. Researchers are improving the robustness of vision-based grasping models through reinforcement learning and simulation-based data augmentation, and there is growing interest in large-scale datasets that span diverse grasp types and object categories, which are essential for training effective models. Other directions include part-based methods for transferring hand-object interactions across object categories, which support more efficient learning and generalization, and probabilistic model-based reinforcement learning for multi-goal dexterous hand manipulation; illustrative sketches of two of these ideas follow the paper list below. Notable papers in this area include:

  • Dexonomy, which proposes an efficient pipeline for synthesizing contact-rich grasps for any grasp type, object, and articulated hand, and constructs a large-scale dataset with 10.7k objects and 9.5M grasps.
  • PartHOI, which introduces a novel method for part-based hand-object interaction transfer using generalized cylinder representations, establishing robust geometric correspondences between object parts and enabling cross-category transfer.
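
To make the part-based transfer idea concrete, the sketch below encodes contact points in the cylinder coordinates of a source part and decodes them on a target part. It is a minimal illustration only: the straight axis, constant radius, and the specific parts and contact points are assumptions for the example, not PartHOI's actual generalized-cylinder fitting or correspondence procedure.

```python
"""Minimal sketch of part-level contact transfer with a (heavily simplified)
generalized-cylinder representation: each part is modelled as a straight axis
with a constant radius, contact points are encoded in cylinder coordinates
(axial fraction, angle, radial ratio) on the source part, and decoded on the
target part. All geometry here is an illustrative assumption."""
import numpy as np

def encode(contact, axis_start, axis_end, radius):
    """Contact point -> (axial fraction t in [0,1], angle theta, radial ratio)."""
    axis = axis_end - axis_start
    length = np.linalg.norm(axis)
    u = axis / length
    rel = contact - axis_start
    t = np.clip(np.dot(rel, u) / length, 0.0, 1.0)
    radial = rel - np.dot(rel, u) * u          # component orthogonal to the axis
    # Build a fixed in-plane frame (e1, e2) orthogonal to the axis.
    ref = np.array([1.0, 0.0, 0.0]) if abs(u[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(u, ref); e1 /= np.linalg.norm(e1)
    e2 = np.cross(u, e1)
    theta = np.arctan2(np.dot(radial, e2), np.dot(radial, e1))
    return t, theta, np.linalg.norm(radial) / radius

def decode(t, theta, ratio, axis_start, axis_end, radius):
    """Inverse of encode: cylinder coordinates -> 3D point on the target part."""
    axis = axis_end - axis_start
    u = axis / np.linalg.norm(axis)
    ref = np.array([1.0, 0.0, 0.0]) if abs(u[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(u, ref); e1 /= np.linalg.norm(e1)
    e2 = np.cross(u, e1)
    return (axis_start + t * axis
            + ratio * radius * (np.cos(theta) * e1 + np.sin(theta) * e2))

# Transfer two contact points from a source handle to a target handle with a
# different length and thickness (all numbers are made up for the example).
src = dict(axis_start=np.zeros(3), axis_end=np.array([0.0, 0.0, 0.08]), radius=0.01)
tgt = dict(axis_start=np.zeros(3), axis_end=np.array([0.0, 0.0, 0.12]), radius=0.015)
contacts = [np.array([0.01, 0.0, 0.02]), np.array([-0.01, 0.0, 0.05])]
transferred = [decode(*encode(c, **src), **tgt) for c in contacts]
print(transferred)
```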
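
The second sketch illustrates the general recipe behind probabilistic model-based reinforcement learning for goal-reaching: an ensemble of learned dynamics models and a cross-entropy-method planner that selects actions whose predicted outcome is closest to a sampled goal. The toy linear dynamics, dimensions, and hyper-parameters are placeholders, not the setup used in the multi-goal manipulation paper.

```python
"""Minimal sketch of multi-goal, model-based control: a small ensemble of
dynamics models plus a cross-entropy-method (CEM) planner that drives the
predicted state toward a sampled goal. Everything below is illustrative."""
import numpy as np

STATE_DIM, ACTION_DIM, HORIZON, ENSEMBLE = 8, 4, 10, 5
rng = np.random.default_rng(0)

# Stand-in "learned" dynamics: each ensemble member is a random linear model
# s' = s + A s + B a.  In practice these would be neural nets fit to rollouts.
models = [(0.01 * rng.standard_normal((STATE_DIM, STATE_DIM)),
           0.10 * rng.standard_normal((STATE_DIM, ACTION_DIM)))
          for _ in range(ENSEMBLE)]

def rollout(state, actions):
    """Propagate an action sequence through every ensemble member and return
    the mean predicted final state."""
    finals = []
    for A, B in models:
        s = state.copy()
        for a in actions:
            s = s + A @ s + B @ a
        finals.append(s)
    return np.mean(finals, axis=0)

def cem_plan(state, goal, iters=5, pop=64, elites=8):
    """Cross-entropy method: sample action sequences, keep those whose
    predicted final state is closest to the goal, refit the sampling
    distribution, and return the best first action."""
    mu = np.zeros((HORIZON, ACTION_DIM))
    std = np.ones((HORIZON, ACTION_DIM))
    for _ in range(iters):
        cand = mu + std * rng.standard_normal((pop, HORIZON, ACTION_DIM))
        costs = np.array([np.linalg.norm(rollout(state, c) - goal) for c in cand])
        elite = cand[np.argsort(costs)[:elites]]
        mu, std = elite.mean(axis=0), elite.std(axis=0) + 1e-3
    return mu[0]

# Goal-conditioned control loop: sample a goal, plan, apply the first action.
state, goal = rng.standard_normal(STATE_DIM), rng.standard_normal(STATE_DIM)
for step in range(20):
    action = cem_plan(state, goal)
    A, B = models[0]                      # toy stand-in "environment"
    state = state + A @ state + B @ action
print("final distance to goal:", np.linalg.norm(state - goal))
```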

Sources

RL-Driven Data Generation for Robust Vision-Based Dexterous Grasping

Dexonomy: Synthesizing All Dexterous Grasp Types in a Grasp Taxonomy

PartHOI: Part-based Hand-Object Interaction Transfer via Generalized Cylinders

Multi-Goal Dexterous Hand Manipulation using Probabilistic Model-based Reinforcement Learning
