Advancements in Autonomous Systems and Multi-Agent Reinforcement Learning

The field of autonomous systems and multi-agent reinforcement learning is moving towards increased integration of human expertise and safety guarantees. Recent developments have focused on enhancing model performance and adaptability through human-in-the-loop approaches, as well as ensuring safe and efficient decision-making in complex scenarios. Noteworthy papers in this area include:

  • Interactive Double Deep Q-network, which introduces a novel approach to integrating human insights into reinforcement learning training processes.
  • Resolving Conflicting Constraints in Multi-Agent Reinforcement Learning with Layered Safety, which proposes a method for safe multi-agent coordination by combining MARL with safety filters.

These advances have significant implications for the development of autonomous vehicles, air traffic control, and other safety-critical applications.
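The safety-filter idea behind layered approaches like the second paper can be illustrated with a minimal sketch: a learned policy proposes an action, and a hand-written shield overrides it whenever it would violate a constraint. Everything here (the grid world, the `safe_actions` and `shielded_step` names, the "stay is always safe" fallback) is a hypothetical toy setup for illustration, not the method from the paper.

```python
def safe_actions(state, agent, actions):
    """Hypothetical shield: keep only actions that stay on a 5x5 grid
    and do not move into a cell currently occupied by another agent."""
    occupied = {pos for a, pos in state.items() if a != agent}
    size = 5  # assumed grid size for this toy example
    moves = {"stay": (0, 0), "up": (0, 1), "down": (0, -1),
             "left": (-1, 0), "right": (1, 0)}
    x, y = state[agent]
    ok = []
    for act in actions:
        dx, dy = moves[act]
        nx, ny = x + dx, y + dy
        if 0 <= nx < size and 0 <= ny < size and (nx, ny) not in occupied:
            ok.append(act)
    return ok

def shielded_step(state, agent, proposed):
    """Layered decision: accept the policy's proposed action if the
    shield allows it, otherwise fall back to a safe default ('stay')."""
    allowed = safe_actions(state, agent,
                           ["stay", "up", "down", "left", "right"])
    return proposed if proposed in allowed else "stay"
```

The key design point is the layering: the reinforcement-learned policy is free to optimize reward, while the shield enforces hard constraints at execution time, so training errors cannot produce unsafe joint actions.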

Sources

Interactive Double Deep Q-network: Integrating Human Interventions and Evaluative Predictions in Reinforcement Learning of Autonomous Driving

Safe and Efficient CAV Lane Changing using Decentralised Safety Shields

Pathfinders in the Sky: Formal Decision-Making Models for Collaborative Air Traffic Control in Convective Weather

Enhancing Safety Standards in Automated Systems Using Dynamic Bayesian Networks

Resolving Conflicting Constraints in Multi-Agent Reinforcement Learning with Layered Safety

Multi-Agent Reinforcement Learning Scheduling to Support Low Latency in Teleoperated Driving

Multi-Agent Reinforcement Learning-based Cooperative Autonomous Driving in Smart Intersections
