Advances in Robotics and Reinforcement Learning

The field of robotics is moving towards greater autonomy and adaptability, with recent work focused on morphology optimization, contact-safe manipulation, and collision avoidance. Reinforcement learning (RL) has proven effective at recovering known optima and at solving complex problems that lack analytical solutions. Integrating RL with complementary techniques such as movement primitives and energy-aware control has yielded reliable, safe task-space trajectories, and new algorithms and frameworks, including certified RL and artificial potential field methods (a generic potential-field sketch follows the paper list below), have improved the efficiency and safety of robotic systems operating in constrained environments. Notable papers include:

  • Task-Aware Morphology Optimization of Planar Manipulators via Reinforcement Learning, which explores the use of RL for morphology optimization in planar robotic manipulators.
  • Contact-Safe Reinforcement Learning with ProMP Reparameterization and Energy Awareness, which proposes a task-space, energy-safe framework for contact-rich manipulation tasks.
  • FACA: Fair and Agile Multi-Robot Collision Avoidance in Constrained Environments with Dynamic Priorities, which introduces a fair and agile collision avoidance approach for multi-robot systems.
  • Safe and Optimal Variable Impedance Control via Certified Reinforcement Learning, which introduces a trajectory-centric RL framework for learning combined dynamic movement primitive (DMP) and variable impedance control (VIC) policies while guaranteeing Lyapunov stability and actuator feasibility; a minimal DMP rollout is sketched right after this list.
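
The combination highlighted above, where RL adapts a movement primitive that is then executed under impedance control, can be illustrated with a short rollout of a discrete dynamic movement primitive (DMP). This is a generic sketch, not the certified framework from the paper: the forcing-term weights `w` stand in for the parameters an RL agent would optimize, and all gains are typical textbook values chosen for the example.

```python
import numpy as np

def rollout_dmp(y0, g, w, tau=1.0, dt=0.01, alpha_z=25.0, beta_z=6.25, alpha_x=1.0):
    """Roll out a discrete dynamic movement primitive (DMP) for one degree of freedom.

    y0, g : start and goal positions
    w     : forcing-term weights (the parameters an RL agent would adapt)
    """
    n_basis = len(w)
    # Basis-function centres spread along the decaying canonical phase x in (0, 1]
    c = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))
    h = n_basis / c**2                      # widths chosen so neighbouring bases overlap

    x, y, v = 1.0, y0, 0.0                  # phase, position, scaled velocity
    trajectory = []
    for _ in range(int(tau / dt)):
        psi = np.exp(-h * (x - c) ** 2)                       # Gaussian basis activations
        f = (psi @ w) / (psi.sum() + 1e-10) * x * (g - y0)    # learned forcing term
        # Spring-damper transformation system pulled towards the goal g
        v += dt / tau * (alpha_z * (beta_z * (g - y) - v) + f)
        y += dt / tau * v
        x += dt / tau * (-alpha_x * x)      # canonical system: the phase decays to zero
        trajectory.append(y)
    return np.array(trajectory)

# With zero weights the primitive reduces to a critically damped reach to the goal;
# an RL agent would instead optimize w (and, in a VIC setting, a stiffness profile).
print(rollout_dmp(y0=0.0, g=1.0, w=np.zeros(10))[-1])
```

Because the phase-modulated forcing term vanishes over time, a DMP of this form always converges to its goal, which is one reason DMP-based policies are attractive when formal stability guarantees are required.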

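The artificial potential field family mentioned in the overview can be sketched as an attraction to the goal plus repulsion from nearby robots. This is a generic illustration of the technique, not the FACA method; the function name `potential_field_step`, the gains `k_att` and `k_rep`, and the influence radius `d0` are all hypothetical choices for the example.

```python
import numpy as np

def potential_field_step(pos, goal, others, k_att=1.0, k_rep=0.5,
                         d0=1.0, max_speed=0.5, dt=0.1):
    """One velocity update for a robot steered by an artificial potential field.

    pos, goal : (2,) position and goal of this robot
    others    : (N, 2) positions of the other robots, treated as moving obstacles
    """
    # Attractive term: a linear pull towards the goal
    force = k_att * (goal - pos)

    # Repulsive term from every robot closer than the influence radius d0
    for other in others:
        diff = pos - other
        d = np.linalg.norm(diff)
        if 1e-6 < d < d0:
            # Classic repulsive gradient: grows sharply as the separation shrinks
            force += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)

    # Saturate the commanded velocity and take one Euler step
    speed = np.linalg.norm(force)
    if speed > max_speed:
        force *= max_speed / speed
    return pos + dt * force

# Toy step: one robot heading right while another robot sits just ahead of it
p = potential_field_step(pos=np.array([0.0, 0.0]),
                         goal=np.array([5.0, 0.0]),
                         others=np.array([[0.8, 0.2]]))
print(p)  # moves towards the goal while being pushed slightly away from the neighbour
```

Plain potential fields treat every robot symmetrically and are prone to deadlock in tight spaces, which is the kind of limitation that fairness- and priority-aware schemes aim to address.
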
Sources

Task-Aware Morphology Optimization of Planar Manipulators via Reinforcement Learning

Contact-Safe Reinforcement Learning with ProMP Reparameterization and Energy Awareness

FACA: Fair and Agile Multi-Robot Collision Avoidance in Constrained Environments with Dynamic Priorities

Safe and Optimal Variable Impedance Control via Certified Reinforcement Learning
