Reinforcement Learning Under Uncertainty

The field of reinforcement learning is increasingly focused on uncertainty and robustness in dynamic environments. Recent developments improve the scalability and efficiency of model-based reinforcement learning and the generalizability of trained agents. Notable advances include physics-informed models, uncertainty-aware dynamics models, and optimistic exploration strategies, which have shown promising results in applications ranging from quadcopter control to open-pit mining optimization.

Two papers stand out. SOMBRL proposes a scalable, optimistic model-based reinforcement learning approach that achieves sublinear regret for nonlinear dynamics. Deep Gaussian Process Proximal Policy Optimization introduces a scalable, model-free actor-critic algorithm that leverages Deep Gaussian Processes to approximate both the policy and the value function, providing well-calibrated uncertainty estimates.
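
To make the optimism idea concrete, here is a minimal sketch of "optimism in the face of uncertainty" with an uncertainty-aware dynamics model: a bootstrap ensemble predicts the next state, and the agent picks the action whose predicted reward plus an ensemble-disagreement bonus is largest. This illustrates the general principle only, not the SOMBRL algorithm; the linear model class, the form of the bonus, and all names here are assumptions.

```python
# Illustrative sketch of optimistic exploration with an uncertainty-aware
# dynamics model. NOT the SOMBRL method; model class and bonus are assumptions.
import numpy as np

rng = np.random.default_rng(0)

class EnsembleDynamics:
    """Bootstrap ensemble of linear dynamics models: s' = W @ [s, a]."""

    def __init__(self, n_models, state_dim, action_dim):
        self.models = [rng.normal(scale=0.1, size=(state_dim, state_dim + action_dim))
                       for _ in range(n_models)]

    def fit(self, states, actions, next_states):
        x = np.hstack([states, actions])
        for i in range(len(self.models)):
            idx = rng.integers(0, len(x), size=len(x))  # bootstrap resample
            W, *_ = np.linalg.lstsq(x[idx], next_states[idx], rcond=None)
            self.models[i] = W.T

    def predict(self, state, action):
        x = np.concatenate([state, action])
        preds = np.stack([W @ x for W in self.models])
        # Ensemble mean as the point prediction; member disagreement as
        # a crude proxy for epistemic uncertainty.
        return preds.mean(axis=0), preds.std(axis=0).sum()

def optimistic_action(model, state, candidate_actions, reward_fn, beta=1.0):
    """Pick the action maximizing predicted reward + uncertainty bonus."""
    def score(a):
        mean_next, disagreement = model.predict(state, a)
        return reward_fn(mean_next) + beta * disagreement  # optimism bonus
    return max(candidate_actions, key=score)
```

The bonus term steers the agent toward state-action regions where the ensemble disagrees, which is the mechanism that optimistic model-based methods exploit to obtain regret guarantees.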
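
For the uncertainty-aware value side, the toy fragment below fits an ordinary (single-layer) Gaussian process to Monte Carlo returns and queries both a mean value estimate and its standard deviation. The paper composes deep GP layers inside a PPO actor-critic; this sketch only demonstrates the calibrated-uncertainty interface, and the synthetic data and kernel choice are assumptions.

```python
# Toy sketch of a GP value function with uncertainty estimates. The actual
# paper uses *deep* GPs inside PPO; data and kernel here are assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
states = rng.uniform(-1, 1, size=(50, 2))                   # visited states
returns = states.sum(axis=1) + 0.1 * rng.normal(size=50)    # noisy toy returns

value_fn = GaussianProcessRegressor(kernel=RBF() + WhiteKernel())
value_fn.fit(states, returns)

query = rng.uniform(-1, 1, size=(5, 2))
mean, std = value_fn.predict(query, return_std=True)  # value estimate + std
for m, s in zip(mean, std):
    print(f"V(s) ~ {m:.2f} +/- {s:.2f}")
```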

Sources

Deep Gaussian Process Proximal Policy Optimization

Dreaming Falcon: Physics-Informed Model-Based Reinforcement Learning for Quadcopters

Deep Learning Decision Support System for Open-Pit Mining Optimisation: GPU-Accelerated Planning Under Geological Uncertainty

SOMBRL: Scalable and Optimistic Model-Based RL

Leveraging weights signals - Predicting and improving generalizability in reinforcement learning
