Robustness and Safety in Reinforcement Learning and Control

The field of reinforcement learning and control is moving toward more robust and safe methods for real-world applications. Recent research has focused on the challenges of uncertainty, exploration, and safety in complex systems. One key direction is distributionally robust reinforcement learning, which handles uncertainty in the transition dynamics and provides performance guarantees under model mismatch. Another is safe reinforcement learning, which keeps the system within safe operating bounds and avoids undesirable outcomes. Conformal prediction, probabilistic safety guarantees, and robust policy synthesis are among the approaches being explored; illustrative sketches of two of these ideas appear after the paper list below.

Noteworthy papers in this area include:

Sample Complexity of Distributionally Robust Off-Dynamics Reinforcement Learning with Online Interaction, which proposes a novel algorithm for online learning in robust Markov decision processes.

Provably Efficient Sample Complexity for Robust CMDP, which establishes a sample complexity guarantee for robust constrained Markov decision processes.

Statistically Assuring Safety of Control Systems using Ensembles of Safety Filters and Conformal Prediction, which introduces a conformal prediction-based framework for providing probabilistic safety guarantees.
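To make the distributionally robust setting concrete, here is a minimal sketch of tabular robust value iteration with an (s, a)-rectangular L1 uncertainty set around a nominal transition model. This is a textbook-style illustration, not the algorithm of any paper above; the function names and the values of the radius `rho` and discount `gamma` are assumptions chosen for the example.

```python
import numpy as np

def worst_case_expectation(p, v, rho):
    """Minimize q @ v over distributions q with ||q - p||_1 <= rho.

    The minimizer greedily moves up to rho / 2 probability mass from the
    highest-value states onto the single lowest-value state.
    """
    q = np.array(p, dtype=float)
    budget = rho / 2.0
    worst = np.argmin(v)
    for s in np.argsort(v)[::-1]:   # drain high-value states first
        if budget <= 0:
            break
        if s == worst:
            continue
        moved = min(q[s], budget)
        q[s] -= moved
        q[worst] += moved
        budget -= moved
    return q @ v

def robust_value_iteration(P, R, gamma=0.95, rho=0.1, tol=1e-8, max_iters=1000):
    """P: (S, A, S) nominal transition tensor, R: (S, A) reward table."""
    S, A = R.shape
    V = np.zeros(S)
    for _ in range(max_iters):
        Q = np.empty((S, A))
        for s in range(S):
            for a in range(A):
                # Robust Bellman backup: an adversary picks the worst model
                # inside the L1 ball before the agent maximizes over actions.
                Q[s, a] = R[s, a] + gamma * worst_case_expectation(P[s, a], V, rho)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    return V, Q.argmax(axis=1)
```

Setting rho = 0 recovers standard value iteration; the rectangularity assumption is what lets the adversary's inner minimization decouple across state-action pairs.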
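In the same spirit as the safety-filter paper above, the following sketch shows how split conformal prediction can attach a finite-sample probabilistic guarantee to a learned safety margin: calibrate a threshold on held-out prediction errors, then certify an action only when the predicted margin clears that threshold. The score definition, `alpha`, and the data here are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

def conformal_threshold(cal_scores, alpha=0.05):
    """Split conformal quantile: with probability >= 1 - alpha (over an
    exchangeable calibration set), a fresh score falls at or below it."""
    n = len(cal_scores)
    k = int(np.ceil((n + 1) * (1.0 - alpha)))  # finite-sample correction
    if k > n:
        return np.inf                          # too few calibration points
    return np.sort(cal_scores)[k - 1]

def certify_action(predicted_margin, threshold):
    """Certify an action as safe only if the predicted safety margin
    exceeds the calibrated worst-case prediction error."""
    return predicted_margin > threshold

# Illustrative usage: scores are |true margin - predicted margin| residuals
# collected on held-out trajectories (synthetic here for the example).
rng = np.random.default_rng(0)
cal_scores = np.abs(rng.normal(0.0, 0.1, size=500))
tau = conformal_threshold(cal_scores, alpha=0.05)
print(certify_action(predicted_margin=0.4, threshold=tau))
```

The guarantee is marginal over the draw of the calibration set and relies on exchangeability, so filters of this kind are typically recalibrated whenever the operating distribution shifts.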
Sources
Sample Complexity of Distributionally Robust Off-Dynamics Reinforcement Learning with Online Interaction
Statistically Assuring Safety of Control Systems using Ensembles of Safety Filters and Conformal Prediction
Safe and Optimal Learning from Preferences via Weighted Temporal Logic with Applications in Robotics and Formula 1