Advances in Reinforcement Learning and Adaptive Control

The field of reinforcement learning and adaptive control is advancing rapidly, with a focus on more robust and efficient algorithms for real-world applications. Recent work highlights the distribution shift problem in transportation networks, where approaches such as meta reinforcement learning and domain randomization show promise. A second active thread is data-efficient adaptive control, exemplified by symbolic dynamics with residual learning. Noteworthy papers include Sym2Real, which demonstrates robust control of quadrotors and racecars with a fully data-driven framework, and SPiDR, a scalable algorithm for safe sim-to-real transfer with provable guarantees. Researchers are also exploring new formulations of reinforcement learning itself, such as Frictional Q-Learning and pure exploration via Frank-Wolfe self-play, which show potential for improving performance and sample efficiency on complex tasks.
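
To make the domain randomization idea concrete, the sketch below re-samples the physical parameters of a toy point-mass simulator at the start of every episode, so any policy trained against it must cope with the whole parameter range rather than one fixed environment. This is a minimal illustration of the general technique; the simulator, parameter names, and ranges are placeholder assumptions, not taken from any of the cited papers.

```python
# Minimal domain randomization sketch (illustrative, not from the cited work).
import numpy as np

rng = np.random.default_rng(0)

def sample_dynamics_params():
    """Draw a fresh set of physical parameters for each training episode."""
    return {
        "mass": rng.uniform(0.5, 2.0),      # kg (assumed range)
        "friction": rng.uniform(0.0, 0.3),  # viscous drag coefficient (assumed)
    }

def step(state, action, params, dt=0.05):
    """Point-mass dynamics: the randomized params change how actions map
    to motion, forcing robustness across the parameter distribution."""
    pos, vel = state
    accel = (action - params["friction"] * vel) / params["mass"]
    return np.array([pos + vel * dt, vel + accel * dt])

for episode in range(3):
    params = sample_dynamics_params()  # re-randomize every episode
    state = np.zeros(2)
    for t in range(100):
        action = -1.0 * state[0] - 0.5 * state[1]  # placeholder policy
        state = step(state, action, params)
    print(f"episode {episode}: params={params}, final state={state}")
```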
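
Similarly, the sketch below shows residual learning on top of a symbolic dynamics model, in the spirit of Sym2Real's framing: a known analytic model supplies the nominal prediction, and a small learned correction absorbs unmodeled effects. The pendulum model, feature map, and least-squares fit are illustrative assumptions, not the paper's actual method.

```python
# Residual learning over a symbolic model (illustrative sketch).
import numpy as np

rng = np.random.default_rng(1)

def symbolic_model(theta, omega, u, dt=0.02):
    """Nominal (symbolic) pendulum prediction: known physics only."""
    alpha = -9.81 * np.sin(theta) + u
    return theta + omega * dt, omega + alpha * dt

def true_system(theta, omega, u, dt=0.02):
    """'Real' system with unmodeled damping the symbolic model misses."""
    alpha = -9.81 * np.sin(theta) + u - 0.4 * omega
    return theta + omega * dt, omega + alpha * dt

# Collect transitions and fit a linear residual on simple features.
X, Y = [], []
for _ in range(500):
    th = rng.uniform(-np.pi, np.pi)
    om, u = rng.uniform(-4, 4), rng.uniform(-2, 2)
    pred = np.array(symbolic_model(th, om, u))
    real = np.array(true_system(th, om, u))
    X.append([th, om, u, 1.0])  # assumed feature map for the residual
    Y.append(real - pred)       # residual = real outcome minus symbolic prediction
W, *_ = np.linalg.lstsq(np.array(X), np.array(Y), rcond=None)

def corrected_model(theta, omega, u):
    """Symbolic prediction plus the learned residual correction."""
    pred = np.array(symbolic_model(theta, omega, u))
    return pred + np.array([theta, omega, u, 1.0]) @ W

print("learned residual weights:\n", W)
```

Because the unmodeled damping term is linear in the angular velocity, a linear residual suffices here; in practice the residual is typically a small neural network trained on real transitions.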

Sources

The Distribution Shift Problem in Transportation Networks using Reinforcement Learning and AI

Sym2Real: Symbolic Dynamics with Residual Learning for Data-Efficient Adaptive Control

Nonconvex Regularization for Feature Selection in Reinforcement Learning

Uncertainty-Based Smooth Policy Regularisation for Reinforcement Learning with Few Demonstrations

SPiDR: A Simple Approach for Zero-Shot Safety in Sim-to-Real Transfer

Central Limit Theorems for Asynchronous Averaged Q-Learning

Asymptotically Optimal Problem-Dependent Bandit Policies for Transfer Learning

Evaluation-Aware Reinforcement Learning

Efficient $\varepsilon$-approximate minimum-entropy couplings

Frictional Q-Learning

Pure Exploration via Frank-Wolfe Self-Play