The field of robotic manipulation and learning is advancing rapidly, with a focus on developing more autonomous, adaptable, and versatile systems. Recent work explores several approaches to improving manipulation, including imitation learning, reinforcement learning, and hierarchical reinforcement learning. Representative examples include LaGarNet, which introduces a goal-conditioned recurrent state-space model for pick-and-place garment flattening; LodeStar, which proposes a learning framework for long-horizon dexterous manipulation; and HERMES and HITTER, which apply human-to-robot learning and hierarchical planning to mobile dexterous manipulation and table tennis, respectively. These advances have implications beyond robotics research itself, including applications in healthcare and manufacturing.
Notable results include LaGarNet, which achieves state-of-the-art performance in pick-and-place garment flattening; LodeStar, which substantially improves task performance and robustness on long-horizon dexterous manipulation tasks; HERMES, which enables mobile bimanual dexterous manipulation with behaviors that generalize across diverse scenarios; and HITTER, which achieves real-world humanoid table tennis with sub-second reactive control.
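To make the goal-conditioned recurrent state-space model idea behind LaGarNet more concrete, the sketch below shows a minimal version in PyTorch: an encoder maps the current observation and a goal into a latent state, a recurrent cell rolls that latent forward under a sequence of actions, and a decoder predicts future observations. This is a simplified illustration under stated assumptions, not the paper's implementation; the deterministic GRU-based latent, the module names, and all layer sizes are hypothetical.

```python
import torch
import torch.nn as nn


class GoalConditionedRSSM(nn.Module):
    """Minimal goal-conditioned recurrent state-space model sketch.

    Encodes the current observation together with a goal into a latent state,
    rolls the latent forward with a GRU cell conditioned on actions, and
    decodes a predicted observation at each step. All sizes are illustrative.
    """

    def __init__(self, obs_dim=128, goal_dim=128, action_dim=4,
                 latent_dim=64, hidden_dim=256):
        super().__init__()
        # Joint observation/goal encoder producing the initial latent state.
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim + goal_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, latent_dim),
        )
        # Recurrent latent dynamics: new latent from (previous latent, action).
        self.dynamics = nn.GRUCell(latent_dim + action_dim, latent_dim)
        # Decoder mapping a latent state back to a predicted observation.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, obs_dim),
        )

    def forward(self, obs, goal, actions):
        # obs, goal: (batch, dim); actions: (batch, horizon, action_dim)
        h = self.encoder(torch.cat([obs, goal], dim=-1))
        predictions = []
        for t in range(actions.shape[1]):
            h = self.dynamics(torch.cat([h, actions[:, t]], dim=-1), h)
            predictions.append(self.decoder(h))
        return torch.stack(predictions, dim=1)  # (batch, horizon, obs_dim)


if __name__ == "__main__":
    model = GoalConditionedRSSM()
    obs = torch.randn(8, 128)       # current observation features
    goal = torch.randn(8, 128)      # goal (e.g., flattened-garment) features
    actions = torch.randn(8, 10, 4) # candidate pick-and-place action sequence
    preds = model(obs, goal, actions)
    print(preds.shape)  # torch.Size([8, 10, 128])
```

In a planning or policy-learning loop, a model of this form would typically be trained on logged interaction data and then used to score or select action sequences that drive the predicted latent toward the goal encoding.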