Robust Control and Reinforcement Learning

The field of control and reinforcement learning is moving toward more robust, risk-aware methods. Researchers are exploring new ways to handle uncertainty and adversarial disturbances in complex systems, including control objectives regularized by cross-entropy and adversarial entropy rather than the usual KL divergence. There is also growing interest in model-based reinforcement learning, with a focus on adapting world models alongside the policy to improve robustness and performance.

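For context (standard background, not taken from the papers below): the classical route to this kind of robustness regularizes an adversarial disturbance distribution q with a KL term against the nominal model p, and by the Gibbs variational principle that is equivalent to an entropic (exponential-utility) risk measure. The cross-entropy and adversarial-entropy formulations surveyed here go beyond this KL-based identity:

```latex
% KL-regularized worst-case cost equals the entropic risk of the nominal model
\sup_{q}\Big\{\mathbb{E}_{q}[c(X)] - \lambda\,\mathrm{KL}(q\,\|\,p)\Big\}
  \;=\; \lambda \log \mathbb{E}_{p}\!\left[e^{c(X)/\lambda}\right],
  \qquad \lambda > 0.
```
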
Noteworthy papers in this area include:

  • Beyond KL-divergence: Risk Aware Control Through Cross Entropy and Adversarial Entropy Regularization, which introduces a flexible framework for constructing control policies robust to adversarial disturbance distributions.
  • Policy-Driven World Model Adaptation for Robust Offline Model-based Reinforcement Learning, which proposes a unified learning objective for adapting world models alongside policies to improve robustness.
  • Risk-Averse Reinforcement Learning with Itakura-Saito Loss, which introduces a numerically stable loss function for risk-averse reinforcement learning based on the Itakura-Saito divergence (a minimal sketch of this divergence follows the list).
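
As a rough illustration of the last item, the Itakura-Saito divergence between positive scalars is d_IS(x, y) = x/y − log(x/y) − 1. The Python sketch below shows the divergence and its use as a pointwise regression loss between positive quantities; the function names and the way the loss is applied here are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def itakura_saito(x, y, eps=1e-8):
    """Itakura-Saito divergence d_IS(x, y) = x/y - log(x/y) - 1 for positive inputs.

    Nonnegative, and exactly zero when x == y; `eps` guards against division
    by zero. (Illustrative helper; not taken from the paper's code.)
    """
    x = np.asarray(x, dtype=float) + eps
    y = np.asarray(y, dtype=float) + eps
    ratio = x / y
    return ratio - np.log(ratio) - 1.0

def is_loss(pred, target, eps=1e-8):
    """Mean Itakura-Saito loss between positive predictions and targets.

    Hypothetical usage: regress a positive quantity (e.g. an exponentiated
    return in an exponential-utility / entropic-risk setting) with a loss
    that penalizes relative rather than absolute error.
    """
    return float(np.mean(itakura_saito(target, pred, eps)))

if __name__ == "__main__":
    pred = np.array([1.0, 2.0, 0.5])
    target = np.array([1.1, 1.8, 0.5])
    print(is_loss(pred, target))  # small nonnegative value; 0 when pred == target
```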

Sources

Beyond KL-divergence: Risk Aware Control Through Cross Entropy and Adversarial Entropy Regularization

Formal Uncertainty Propagation for Stochastic Dynamical Systems with Additive Noise

Policy-Driven World Model Adaptation for Robust Offline Model-based Reinforcement Learning

Strong convergence in the infinite horizon of numerical methods for stochastic delay differential equations

A Temporal Difference Method for Stochastic Continuous Dynamics

Risk-Averse Reinforcement Learning with Itakura-Saito Loss
