Advances in Robot Learning and Control

The field of robot learning and control is advancing rapidly, with a focus on developing more efficient and adaptive methods for robotic systems to acquire complex skills. One key direction is the use of large-scale video data to learn semantic action flows, which can then be used to improve robot manipulation skills. Researchers are also exploring modal-level exploration and data selection as a way to enable self-improvement in robotic systems.

Another area of focus is the development of more effective path planning algorithms for modular self-reconfigurable satellites, aimed at improving the efficiency and flexibility of satellite systems. Latent space inference and variational replanning frameworks are being investigated to make robotic systems more adaptive and efficient during execution. In addition, discrete-time Gaussian process mixtures are being studied as a flexible policy representation for imitation learning in robot manipulation, with promising results in both scalability and performance.

Noteworthy papers in this area include ViSA-Flow, which achieves state-of-the-art performance in low-data regimes; Latent Adaptive Planner, which enables robots to perform complex interactions with human-like adaptability; and The Unreasonable Effectiveness of Discrete-Time Gaussian Process Mixtures for Robot Policy Learning, which presents a novel approach to flexible policy representation and imitation learning and achieves state-of-the-art performance on diverse few-shot manipulation benchmarks.
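To make the policy-representation idea concrete, the following is a minimal sketch of a discrete-time Gaussian process mixture policy learned from a handful of demonstrations. It is not the paper's implementation: the clustering step, kernel choice, and the helper names fit_gp_mixture_policy and act are illustrative assumptions. Each mixture component is a Gaussian process that maps the discrete time step to an action; components come from clustering whole demonstration trajectories, and mixture weights are the cluster proportions.

```python
# Sketch only: one possible reading of a discrete-time GP mixture policy
# for few-shot imitation, not the method from the cited paper.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel


def fit_gp_mixture_policy(demos, n_components=2):
    """demos: array of shape (n_demos, horizon, action_dim)."""
    n_demos, horizon, action_dim = demos.shape
    # Cluster demonstrations by their flattened action sequences.
    labels = KMeans(n_clusters=n_components, n_init=10).fit_predict(
        demos.reshape(n_demos, -1))
    t = np.arange(horizon, dtype=float).reshape(-1, 1)  # discrete time index
    components, weights = [], []
    for k in range(n_components):
        members = demos[labels == k]            # (m, horizon, action_dim)
        if len(members) == 0:
            continue
        X = np.tile(t, (len(members), 1))       # stacked time indices
        y = members.reshape(-1, action_dim)     # corresponding actions
        gp = GaussianProcessRegressor(
            kernel=ConstantKernel(1.0) * RBF(length_scale=5.0),
            alpha=1e-3, normalize_y=True)
        gp.fit(X, y)                            # GP: time step -> action
        components.append(gp)
        weights.append(len(members) / n_demos)
    return components, np.array(weights)


def act(components, weights, t_step):
    """Return the mixture-mean action at discrete time step t_step."""
    preds = np.stack([gp.predict(np.array([[float(t_step)]]))[0]
                      for gp in components])
    return weights @ preds


# Toy usage: 6 noisy sinusoidal demonstrations, 50 steps, 2-D actions.
rng = np.random.default_rng(0)
steps = np.linspace(0, np.pi, 50)
demos = np.stack([
    np.column_stack([np.sin(steps) + 0.05 * rng.standard_normal(50),
                     np.cos(steps) + 0.05 * rng.standard_normal(50)])
    for _ in range(6)])
components, weights = fit_gp_mixture_policy(demos, n_components=2)
print(act(components, weights, t_step=10))
```

In this reading, the GP over time smooths actions within a behaviour mode while the mixture captures distinct strategies across demonstrations; the actual paper may condition on state as well as time and use a different inference scheme.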

Sources

ViSA-Flow: Accelerating Robot Skill Learning via Large-Scale Video Semantic Action Flow

SIME: Enhancing Policy Self-Improvement with Modal-level Exploration

A Goal-Oriented Reinforcement Learning-Based Path Planning Algorithm for Modular Self-Reconfigurable Satellites

Latent Adaptive Planner for Dynamic Manipulation

The Unreasonable Effectiveness of Discrete-Time Gaussian Process Mixtures for Robot Policy Learning

Visual Imitation Enables Contextual Humanoid Control

Hierarchical Task Decomposition for Execution Monitoring and Error Recovery: Understanding the Rationale Behind Task Demonstrations

Replay to Remember (R2R): An Efficient Uncertainty-driven Unsupervised Continual Learning Framework Using Generative Replay

CLAM: Continuous Latent Action Models for Robot Learning from Unlabeled Demonstrations
