Reinforcement Learning Advances in Dynamic Systems and Control

The field of reinforcement learning (RL) is advancing rapidly, with a focus on improving the performance and robustness of dynamic systems and control. Recent developments have seen the integration of RL with other technologies, such as digital twins and diffusion models, to create more sophisticated and adaptive control systems. These advances have the potential to revolutionize a wide range of applications, from robotics and autonomous systems to complex infrastructure management. Notable papers in this area include:

- Digital Twin-enabled Multi-generation Control Co-Design with Deep Reinforcement Learning, which presents a framework for integrating digital twins with RL to improve the performance and robustness of dynamic systems.
- Emergence of hybrid computational dynamics through reinforcement learning, which demonstrates how RL can be used to discover complex computational dynamics in neural networks.
- Gym-TORAX: Open-source software for integrating RL with plasma control simulators, which provides an open-source interface between RL agents and plasma control simulators.
- Offline Reinforcement Learning with Generative Trajectory Policies, which introduces a new paradigm for offline RL that uses generative models to learn continuous-time trajectories.
- Rethinking the Role of Dynamic Sparse Training for Scalable Deep Reinforcement Learning, which investigates dynamic sparse training as a route to more scalable deep RL.
- Diffusion Models for Reinforcement Learning: Foundations, Taxonomy, and Development, which provides a comprehensive survey of diffusion-based RL methods.
- Enhancing Sampling-based Planning with a Library of Paths, which improves the efficiency of sampling-based planners by reusing previous experience.
- Escaping Local Optima in the Waddington Landscape: A Multi-Stage TRPO-PPO Approach for Single-Cell Perturbation Analysis, which introduces a multi-stage RL algorithm for modeling cellular responses to genetic and chemical perturbations.
- A New Perspective on Transformers in Online Reinforcement Learning for Continuous Control, which investigates the use of transformers in online model-free RL.
- Simplicial Embeddings Improve Sample Efficiency in Actor-Critic Agents, which proposes simplicial embeddings to improve the sample efficiency of actor-critic agents.
- A Diffusion-Refined Planner with Reinforcement Learning Priors for Confined-Space Parking, which improves planning performance in confined-space parking environments using a diffusion-refined planner with RL priors.
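To make one of the representational ideas above concrete: a simplicial embedding, in the general sense invoked by the actor-critic paper's title, projects a latent feature vector onto a concatenation of low-dimensional probability simplices via groupwise softmaxes, which bounds and sparsifies the representation the critic sees. The sketch below is a minimal, generic NumPy illustration of that projection only, not the paper's implementation; the group count, latent size, and temperature are assumptions for the example.

```python
import numpy as np

def simplicial_embedding(z, num_groups, temperature=1.0):
    """Project a latent vector z onto `num_groups` softmax simplices.

    z is split into equal-sized groups; each group is passed through a
    temperature-scaled softmax, so every group sums to 1 and all entries
    lie in [0, 1]. Generic sketch of the idea, not a specific paper's code.
    """
    z = np.asarray(z, dtype=np.float64)
    assert z.size % num_groups == 0, "latent size must divide evenly into groups"
    groups = z.reshape(num_groups, -1) / temperature
    # Numerically stable softmax within each group.
    groups = groups - groups.max(axis=1, keepdims=True)
    exp = np.exp(groups)
    simplices = exp / exp.sum(axis=1, keepdims=True)
    return simplices.reshape(-1)

# Example: a 6-dim latent mapped onto two 3-dimensional simplices.
emb = simplicial_embedding([2.0, 0.1, -1.0, 0.5, 0.5, 3.0], num_groups=2)
```

In an actor-critic agent this projection would sit between the shared encoder and the policy/value heads, replacing an unconstrained latent with one whose groupwise structure is fixed by construction.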

Sources

Digital Twin-enabled Multi-generation Control Co-Design with Deep Reinforcement Learning

Emergence of hybrid computational dynamics through reinforcement learning

Gym-TORAX: Open-source software for integrating RL with plasma control simulators

Offline Reinforcement Learning with Generative Trajectory Policies

Rethinking the Role of Dynamic Sparse Training for Scalable Deep Reinforcement Learning

Diffusion Models for Reinforcement Learning: Foundations, Taxonomy, and Development

Enhancing Sampling-based Planning with a Library of Paths

Escaping Local Optima in the Waddington Landscape: A Multi-Stage TRPO-PPO Approach for Single-Cell Perturbation Analysis

A New Perspective on Transformers in Online Reinforcement Learning for Continuous Control

Simplicial Embeddings Improve Sample Efficiency in Actor-Critic Agents

A Diffusion-Refined Planner with Reinforcement Learning Priors for Confined-Space Parking
