Advancements in Robotic Manipulation and Learning

The field of robotic manipulation and learning is advancing rapidly, with a focus on developing more autonomous, adaptable, and versatile systems. Recent research explores several complementary approaches, including imitation learning, reinforcement learning, and hierarchical reinforcement learning. Noteworthy papers in this area include LaGarNet, which presents a goal-conditioned recurrent state-space model for pick-and-place garment flattening, and LodeStar, which proposes a learning framework for long-horizon dexterous manipulation tasks. HERMES demonstrates human-to-robot learning from multi-source motion data for mobile dexterous manipulation, while HITTER combines hierarchical planning and learning for real-world humanoid table tennis. These advances have implications for applications such as household robotics, healthcare, and manufacturing.
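To make the goal-conditioned modeling idea concrete, the following is a minimal, hypothetical sketch of a goal-conditioned recurrent policy in PyTorch: an observation embedding and a goal embedding feed a recurrent belief state, which is decoded into a low-dimensional pick-and-place action. This is an illustrative assumption about the general technique, not LaGarNet's actual architecture; the class name, dimensions, and GRU-based core are placeholders.

```python
# Hypothetical sketch of a goal-conditioned recurrent policy (not LaGarNet's code).
import torch
import torch.nn as nn


class GoalConditionedRecurrentPolicy(nn.Module):
    def __init__(self, obs_dim=64, goal_dim=64, latent_dim=128, action_dim=4):
        super().__init__()
        # Encode the current observation and the goal into a shared feature space.
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim + goal_dim, latent_dim), nn.ReLU()
        )
        # A recurrent core maintains a belief state over the partially observed scene.
        self.rnn = nn.GRUCell(latent_dim, latent_dim)
        # The action head maps the belief state to a pick-and-place action.
        self.action_head = nn.Linear(latent_dim, action_dim)

    def forward(self, obs, goal, hidden):
        features = self.encoder(torch.cat([obs, goal], dim=-1))
        hidden = self.rnn(features, hidden)
        action = self.action_head(hidden)
        return action, hidden


if __name__ == "__main__":
    policy = GoalConditionedRecurrentPolicy()
    obs = torch.randn(1, 64)      # current observation features (placeholder)
    goal = torch.randn(1, 64)     # target configuration, e.g. a flattened garment
    hidden = torch.zeros(1, 128)  # initial recurrent state
    for _ in range(3):            # roll the policy out for a few steps
        action, hidden = policy(obs, goal, hidden)
    print(action.shape)           # torch.Size([1, 4])
```

In a garment-flattening setting, the goal embedding would encode the desired flattened configuration, and the recurrent state would help the policy cope with cloth that is only partially observable from a single view.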

Notable papers include LaGarNet, which achieves state-of-the-art performance in pick-and-place garment flattening; LodeStar, which significantly improves task performance and robustness in long-horizon dexterous manipulation; HERMES, which enables mobile bimanual dexterous manipulation with generalizable behaviors across diverse scenarios; and HITTER, which achieves real-world humanoid table tennis with sub-second reactive control.

Sources

Increasing Interaction Fidelity: Training Routines for Biomechanical Models in HCI

A Dataset and Benchmark for Robotic Cloth Unfolding Grasp Selection: The ICRA 2024 Cloth Competition

LaGarNet: Goal-Conditioned Recurrent State-Space Models for Pick-and-Place Garment Flattening

Robotic Manipulation via Imitation Learning: Taxonomy, Evolution, Benchmark, and Challenges

Proximal Supervised Fine-Tuning

Multi-layer Abstraction for Nested Generation of Options (MANGO) in Hierarchical Reinforcement Learning

Effect of Performance Feedback Timing on Motor Learning for a Surgical Training Task

LodeStar: Long-horizon Dexterity via Synthetic Data Augmentation from Human Demonstrations

Modeling and Control Framework for Autonomous Space Manipulator Handover Operations

Arnold: a generalist muscle transformer policy

Fuzzy-Based Control Method for Autonomous Spacecraft Inspection with Minimal Fuel Consumption

Deep Sensorimotor Control by Imitating Predictive Models of Human Motion

Quantitative Outcome-Oriented Assessment of Microsurgical Anastomosis

QuadKAN: KAN-Enhanced Quadruped Motion Control via End-to-End Reinforcement Learning

Real-time Testing of Satellite Attitude Control With a Reaction Wheel Hardware-In-the-Loop Platform

From Tabula Rasa to Emergent Abilities: Discovering Robot Skills via Real-World Unsupervised Quality-Diversity

AutoRing: Imitation Learning-based Autonomous Intraocular Foreign Body Removal Manipulation with Eye Surgical Robot

Gentle Object Retraction in Dense Clutter Using Multimodal Force Sensing and Imitation Learning

Impedance Primitive-augmented Hierarchical Reinforcement Learning for Sequential Tasks

Divide, Discover, Deploy: Factorized Skill Learning with Symmetry and Style Priors

HERMES: Human-to-Robot Embodied Learning from Multi-Source Motion Data for Mobile Dexterous Manipulation

Non-expert to Expert Motion Translation Using Generative Adversarial Networks

HITTER: A HumanoId Table TEnnis Robot via Hierarchical Planning and Learning
