The fields of robotics and artificial intelligence are moving toward more dynamic and adaptive interaction scenarios, with researchers exploring methods that let robots engage their environments more naturally and efficiently. A key focus is the development of control policies that adapt to different morphologies and environments. Recent work has demonstrated learning latent representations and control policies for musculoskeletal characters without motion data, enabling energy-aware, morphology-adaptive locomotion. Other research designs human-like RL agents through trajectory optimization with action quantization, yielding more natural and interpretable behavior. There have also been advances in discovering optimal natural gaits for dissipative systems, such as legged robots, through virtual energy injection and continuation approaches. Noteworthy papers include:
- Humanoid Whole-Body Badminton via Multi-Stage Reinforcement Learning, which presents a unified whole-body controller for humanoid badminton.
- FreeMusco: Motion-Free Learning of Latent Control for Morphology-Adaptive Locomotion in Musculoskeletal Characters, which enables energy-aware and morphology-adaptive locomotion without motion data.
- Learning Human-Like RL Agents Through Trajectory Optimization With Action Quantization, which achieves human-like behavior in RL agents through macro action quantization.
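The action-quantization idea above can be illustrated with a minimal sketch: a continuous action trajectory is snapped to the nearest entry in a small codebook of macro actions (nearest-neighbor vector quantization). This is not the papers' implementation; the codebook, its dimensions, and the `quantize` helper are all hypothetical, chosen only to show the lookup step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical codebook: K macro actions, each a short sequence
# of `horizon` steps over `action_dim` control dimensions.
K, horizon, action_dim = 8, 5, 3
codebook = rng.normal(size=(K, horizon, action_dim))

def quantize(traj: np.ndarray) -> tuple[int, np.ndarray]:
    """Snap a continuous action trajectory to the nearest macro action.

    Returns the codebook index and the quantized trajectory.
    """
    # L2 distance from the trajectory to every codebook entry.
    dists = np.linalg.norm((codebook - traj).reshape(K, -1), axis=1)
    idx = int(np.argmin(dists))
    return idx, codebook[idx]

# A policy would then select among K discrete macro actions instead of
# emitting raw continuous controls at every step.
noisy = codebook[3] + 0.01 * rng.normal(size=(horizon, action_dim))
idx, macro = quantize(noisy)
```

In this toy setup, a trajectory near codebook entry 3 maps back to index 3, so the agent's behavior is expressed over a small, interpretable set of macro actions.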