Advances in Autonomous Robotics and Reinforcement Learning

The field of autonomous robotics is advancing rapidly, with recent research emphasizing reinforcement learning and motion planning for complex tasks. A key challenge is designing control systems robust and flexible enough to cope with dynamic environments and uncertain conditions. To address it, researchers are exploring new approaches to reinforcement learning, such as diffusion-based methods and non-differentiable reward optimization, which have shown promising results in both simulation and real-world experiments. Notable papers in this area include SPLASH, which introduces a sample-efficient preference-based inverse reinforcement learning method for long-horizon adversarial tasks, and REACT, which proposes a real-time entanglement-aware coverage path planning framework for tethered underwater vehicles. Overall, the field is moving toward more capable autonomous systems built on these increasingly adaptive planning and control methods.
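To make the preference-based idea concrete, here is a minimal sketch of the core mechanism behind preference-based (inverse) reinforcement learning: fitting a reward function from pairwise trajectory preferences with the Bradley-Terry model. The linear reward parameterization, feature dimensions, and training loop below are illustrative assumptions for exposition only, not the actual SPLASH architecture.

```python
import numpy as np

# Assumed setup (not from SPLASH): per-step features phi(s_t) of dimension d,
# a linear reward r(s) = w . phi(s), and a hidden "true" reward used only to
# label which trajectory in each pair is preferred.
rng = np.random.default_rng(0)
d, T, n_pairs = 4, 20, 300
w_true = rng.normal(size=d)

def sample_traj():
    return rng.normal(size=(T, d))  # one trajectory = (T, d) feature matrix

def traj_return(w, traj):
    return float(traj.sum(axis=0) @ w)  # sum_t w . phi(s_t)

# Build a preference dataset: the first trajectory in each pair is preferred.
pairs = []
for _ in range(n_pairs):
    a, b = sample_traj(), sample_traj()
    pairs.append((a, b) if traj_return(w_true, a) >= traj_return(w_true, b)
                 else (b, a))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Bradley-Terry model: P(A preferred over B) = sigmoid(R(A) - R(B)).
# Minimize the negative log-likelihood by gradient descent on w.
w, lr = np.zeros(d), 0.05
for _ in range(200):
    grad = np.zeros(d)
    for pref, other in pairs:
        diff = pref.sum(axis=0) - other.sum(axis=0)
        p = sigmoid(w @ diff)        # current P(pref > other)
        grad += -(1.0 - p) * diff    # gradient of -log p w.r.t. w
    w -= lr * grad / len(pairs)

# The learned reward direction should align with the hidden true reward.
cosine = w @ w_true / (np.linalg.norm(w) * np.linalg.norm(w_true))
```

On this toy problem the learned weight vector recovers the preference-generating reward direction closely; real methods replace the linear reward with a neural network and add the sample-efficiency machinery the paper is about.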

Sources

SPLASH! Sample-efficient Preference-based inverse reinforcement learning for Long-horizon Adversarial tasks from Suboptimal Hierarchical demonstrations

Computing optimal trajectories for a tethered pursuer

Real-Time Adaptive Motion Planning via Point Cloud-Guided, Energy-Based Diffusion and Potential Fields

Consistency Trajectory Planning: High-Quality and Efficient Trajectory Optimization for Offline Model-Based Reinforcement Learning

Customize Harmonic Potential Fields via Hybrid Optimization over Homotopic Paths

Ariel Explores: Vision-based underwater exploration and inspection via generalist drone-level autonomy

Should We Ever Prefer Decision Transformer for Offline Reinforcement Learning?

REACT: Real-time Entanglement-Aware Coverage Path Planning for Tethered Underwater Vehicles

Uncertainty Aware Mapping for Vision-Based Underwater Robots

ILCL: Inverse Logic-Constraint Learning from Temporally Constrained Demonstrations

Ocean Diviner: A Diffusion-Augmented Reinforcement Learning for AUV Robust Control in the Underwater Tasks

A Fast Method for Planning All Optimal Homotopic Configurations for Tethered Robots and Its Extended Applications

NemeSys: An Online Underwater Explorer with Goal-Driven Adaptive Autonomy

Non-differentiable Reward Optimization for Diffusion-based Autonomous Motion Planning

ZipMPC: Compressed Context-Dependent MPC Cost via Imitation Learning

Signal Temporal Logic Compliant Co-design of Planning and Control
