Safe Control and Reinforcement Learning in Dynamical Systems

The field of safe control and reinforcement learning in dynamical systems is advancing rapidly, with a focus on methods that guarantee safety and stability in complex systems. Recent research has explored control barrier functions (CBFs) as a source of mathematical safety guarantees, as well as the integration of CBFs with reinforcement learning (RL) to enable safe exploration and exploitation. Notably, the development of viscosity CBFs has bridged the gap between CBFs and the Hamilton-Jacobi reachability framework, providing a more unified view of safe control. Gaussian process implicit surfaces used as CBFs have shown promise for safe robot navigation, and adaptive action scaling in constraint-aware RL has demonstrated significant reductions in constraint violations while maintaining task performance. Overall, the field is moving toward more robust and efficient methods for safe control and RL, with potential applications across robotics, autonomous systems, and building energy management.

Noteworthy papers include:

Viscosity CBFs: Bridging the Control Barrier Function and Hamilton-Jacobi Reachability Frameworks in Safe Control Theory, which introduces viscosity CBFs and their connection to control barrier-value functions (CB-VFs).

CBF-RL: Safety Filtering Reinforcement Learning in Training with Control Barrier Functions, which proposes a framework for generating safe behaviors by enforcing CBFs during RL training.
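To make the CBF safety-filtering idea above concrete, here is a minimal sketch of a CBF-based filter for a single-integrator system avoiding a circular obstacle. The model, function name, and parameters are illustrative assumptions, not drawn from any of the papers listed below; for this simple constraint the usual CBF quadratic program admits a closed-form projection, so no QP solver is needed.

```python
import numpy as np

def cbf_safety_filter(u_nom, x, x_obs, r, alpha=1.0):
    """Minimally modify a nominal input so the single-integrator
    system xdot = u stays outside a circular obstacle of radius r.

    Uses the barrier h(x) = ||x - x_obs||^2 - r^2 and the CBF
    condition grad_h(x) @ u >= -alpha * h(x). The QP
        min ||u - u_nom||^2  s.t.  a @ u >= b
    has the closed-form solution
        u_nom + max(0, (b - a @ u_nom) / ||a||^2) * a.
    """
    h = np.dot(x - x_obs, x - x_obs) - r**2
    a = 2.0 * (x - x_obs)        # gradient of h
    b = -alpha * h
    slack = b - a @ u_nom
    if slack <= 0.0:             # nominal input already satisfies the CBF condition
        return u_nom
    return u_nom + (slack / (a @ a)) * a

# Usage: a nominal controller drives toward the origin straight through
# the obstacle; the filter deflects it just enough to remain safe.
x = np.array([2.0, 0.1])
u_safe = cbf_safety_filter(u_nom=-x, x=x, x_obs=np.array([1.0, 0.0]), r=0.5)
```

Frameworks such as CBF-RL apply this same filtering idea during training rather than only at deployment, so the learned policy internalizes the safe behavior.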

Sources

Sequential Convex Programming for 6-DoF Powered Descent Guidance with Continuous-Time Compound State-Triggered Constraints

Designing Control Barrier Functions Using a Dynamic Backup Policy

Viscosity CBFs: Bridging the Control Barrier Function and Hamilton-Jacobi Reachability Frameworks in Safe Control Theory

Computing Safe Control Inputs using Discrete-Time Matrix Control Barrier Functions via Convex Optimization

Controller for Incremental Input-to-State Practical Stabilization of Partially Unknown systems with Invariance Guarantees

Robust Closed-Form Control for MIMO Nonlinear Systems under Conflicting Time-Varying Hard and Soft Constraints

Constraint-Aware Reinforcement Learning via Adaptive Action Scaling

Gaussian Process Implicit Surfaces as Control Barrier Functions for Safe Robot Navigation

Non-Gaussian Distribution Steering in Nonlinear Dynamics with Conjugate Unscented Transformation

Belief Space Control of Safety-Critical Systems Under State-Dependent Measurement Noise

STEMS: Spatial-Temporal Enhanced Safe Multi-Agent Coordination for Building Energy Management

Demystifying the Mechanisms Behind Emergent Exploration in Goal-conditioned RL

Further Results on Safety-Critical Stabilization of Force-Controlled Nonholonomic Mobile Robots

CBF-RL: Safety Filtering Reinforcement Learning in Training with Control Barrier Functions
