Advances in Robot Learning and Control

Robot learning and control are advancing quickly, driven by new methods for skill acquisition and task execution. Recent research has explored imitation learning, reinforcement learning, and simulation-based training to improve robot performance on complex tasks such as assembly, surgery, and manipulation. Notably, the integration of planning and learning has emerged as a promising route to generalizable and efficient robot control. In parallel, advances in scene-graph-based video synthesis and diffusion models have enabled fine-grained control and precise synthesis of complex scenes, with applications in surgical simulation and robotics.

Noteworthy papers in this area include STAR, which introduces a framework for learning diverse robot skill abstractions through rotation-augmented vector quantization; SLAC, which presents a simulation-pretrained latent action space that enables real-world reinforcement learning for complex robotic embodiments; and Fabrica, which demonstrates a dual-arm robotic system for autonomous assembly of general multi-part objects via integrated planning and learning.
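To make the "rotation-augmented vector quantization" idea behind STAR more concrete, the snippet below is a minimal, generic sketch of its two ingredients: assigning latent skill embeddings to their nearest entry in a learned codebook, and applying a random rotation to the latents as an augmentation. All names, shapes, and the rotation-sampling scheme are illustrative assumptions, not STAR's actual implementation.

```python
# Illustrative sketch only: nearest-codebook vector quantization of latent skill
# embeddings, with a random-rotation augmentation applied before lookup.
# Shapes and names are hypothetical; this is not the STAR paper's code.
import numpy as np

rng = np.random.default_rng(0)

def random_rotation(dim: int) -> np.ndarray:
    """Sample a random rotation matrix via QR decomposition of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.normal(size=(dim, dim)))
    q *= np.sign(np.diag(r))      # fix column signs so the factorization is unique
    if np.linalg.det(q) < 0:      # ensure a proper rotation (determinant +1)
        q[:, 0] *= -1
    return q

def quantize(latents: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Map each latent vector to the index of its nearest codebook entry (Euclidean)."""
    dists = np.linalg.norm(latents[:, None, :] - codebook[None, :, :], axis=-1)
    return dists.argmin(axis=1)

dim, num_codes = 8, 16
codebook = rng.normal(size=(num_codes, dim))   # stand-in for learned skill prototypes
latents = rng.normal(size=(32, dim))           # stand-in for encoder outputs

rotated = latents @ random_rotation(dim).T     # rotation augmentation of the latents
codes = quantize(rotated, codebook)            # discrete skill indices per latent
print(codes[:10])
```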

Sources

Imitation Learning-Based Path Generation for the Complex Assembly of Deformable Objects

SG2VID: Scene Graphs Enable Fine-Grained Control for Video Synthesis

STAR: Learning Diverse Robot Skill Abstractions through Rotation-Augmented Vector Quantization

SLAC: Simulation-Pretrained Latent Action Space for Whole-Body Real-World RL

Learning Dissection Trajectories from Expert Surgical Videos via Imitation Learning with Equivariant Diffusion

Fabrica: Dual-Arm Assembly of General Multi-Part Objects via Integrated Planning and Learning

A Smooth Sea Never Made a Skilled SAILOR: Robust Imitation via Learning to Search
