Advancements in Distributed Learning and Control

The field of distributed learning and control is advancing rapidly, with a focus on efficient routing, congestion mitigation, and cooperative decision-making. Researchers are applying Q-learning, event-triggered control, and reinforcement learning to challenges in IoT sensor networks, mixed-autonomy traffic systems, and cyber-physical systems. Notably, combining distributed Q-learning with model predictive control shows promise for improving the convergence speed and scalability of learning algorithms. Reinforcement learning is also proving effective for path planning in dynamic environments, with decentralized frameworks and hierarchical decomposition emerging as strong approaches. Noteworthy papers in this area include:

  • Distributed Q-learning-based Shortest-Path Tree Construction in IoT Sensor Networks, which builds shortest-path trees through local, per-node Q-learning updates rather than centralized route computation.
  • Event-Triggered Regulation of Mixed-Autonomy Traffic Under Varying Traffic Conditions, which develops an event-triggered control framework for mitigating congestion in mixed-autonomy traffic systems.
  • Second-Order MPC-Based Distributed Q-Learning, which presents a second-order extension to MPC-based Q-learning with distributed updates, demonstrating improved convergence speed and stability.
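The distributed Q-learning idea behind the first paper can be illustrated with a minimal sketch: each sensor node keeps Q-values only over its own neighbors (estimated cost-to-sink) and updates them from values its neighbors report, so no node needs global topology. The graph, costs, and hyperparameters below are illustrative assumptions, not details taken from the paper.

```python
import random

random.seed(0)  # seeded only for reproducibility of this sketch

# Hypothetical 5-node sensor network: node -> {neighbor: link cost}.
GRAPH = {
    0: {1: 1.0, 2: 1.0},
    1: {0: 1.0, 3: 1.0},
    2: {0: 1.0, 3: 1.0, 4: 1.0},
    3: {1: 1.0, 2: 1.0, 4: 1.0},
    4: {2: 1.0, 3: 1.0},  # node 4 acts as the sink
}
SINK = 4
ALPHA, GAMMA, EPS = 0.5, 1.0, 0.1  # assumed hyperparameters

# Distributed state: each node stores Q-values only for its own neighbors.
Q = {n: {nb: 0.0 for nb in nbrs} for n, nbrs in GRAPH.items()}

def next_hop(node, explore=True):
    """Epsilon-greedy choice of forwarding neighbor (lowest cost-to-sink)."""
    if explore and random.random() < EPS:
        return random.choice(list(Q[node]))
    return min(Q[node], key=Q[node].get)

def train(episodes=2000):
    for _ in range(episodes):
        node = random.choice([n for n in GRAPH if n != SINK])
        while node != SINK:
            nb = next_hop(node)
            # A neighbor reports 0 at the sink, else its own best Q-value;
            # this is the only information exchanged between nodes.
            v_nb = 0.0 if nb == SINK else min(Q[nb].values())
            Q[node][nb] += ALPHA * (GRAPH[node][nb] + GAMMA * v_nb - Q[node][nb])
            node = nb

train()
# After learning, each node's greedy choice defines the shortest-path tree.
tree = {n: next_hop(n, explore=False) for n in GRAPH if n != SINK}
print(tree)
```

Because link costs are positive, walks that loop keep inflating the looping Q-values, so greedy forwarding eventually settles on the true shortest paths toward the sink; this mirrors the classic Q-routing argument rather than any specific result from the paper.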

Sources

Distributed Q-learning-based Shortest-Path Tree Construction in IoT Sensor Networks

Event-Triggered Regulation of Mixed-Autonomy Traffic Under Varying Traffic Conditions

Asymptotic analysis of cooperative censoring policies in sensor networks

Parallelizing Tree Search with Twice Sequential Monte Carlo

Path Planning through Multi-Agent Reinforcement Learning in Dynamic Environments

Continual Reinforcement Learning for Cyber-Physical Systems: Lessons Learned and Open Challenges

Second-Order MPC-Based Distributed Q-Learning
