Reinforcement Learning for Complex Decision-Making

Reinforcement learning research is increasingly tackling complex decision-making problems across domains such as agriculture, aerial combat, and network control. A common thread is the push toward more transparent, explainable, and trustworthy models that align with human values and practices, driven by the need for effective human-AI collaboration and the recognition that technical performance alone does not determine successful AI adoption. Noteworthy papers include:

  • Developing and Integrating Trust Modeling into Multi-Objective Reinforcement Learning for Intelligent Agricultural Management, which introduces a trust model into multi-objective reinforcement learning to improve AI adoption in agriculture (a hedged sketch of this idea follows the list).
  • Explaining Strategic Decisions in Multi-Agent Reinforcement Learning for Aerial Combat Tactics, which adapts explainability techniques to enhance transparency in aerial combat scenarios.
  • Embedded Mean Field Reinforcement Learning for Perimeter-defense Game, which proposes an Embedded Mean-Field Actor-Critic framework for large-scale heterogeneous perimeter-defense games.
  • Interpretable Reinforcement Learning for Load Balancing using Kolmogorov-Arnold Networks, which uses Kolmogorov-Arnold Networks for interpretable reinforcement learning in network control (a minimal KAN-style layer sketch also appears below).
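
To make the trust-modeling idea concrete, the sketch below shows one simple way a trust signal could modulate a multi-objective reward. The digest does not specify the paper's actual formulation, so the linear scalarization, the trust-gating, and the helper names (`scalarize`, `update_trust`) are all illustrative assumptions rather than the authors' method.

```python
import numpy as np

def scalarize(rewards: np.ndarray, weights: np.ndarray, trust: float) -> float:
    """Combine multi-objective rewards with a linear scalarization,
    gated by the current trust level (illustrative only)."""
    base = float(weights @ rewards)   # standard weighted sum of objectives
    return trust * base               # low trust damps the scalarized return

def update_trust(trust: float, followed_advice: bool, outcome_gain: float,
                 lr: float = 0.1) -> float:
    """Toy trust dynamics: trust rises when followed advice pays off,
    and decays otherwise (hypothetical update rule)."""
    target = 1.0 if (followed_advice and outcome_gain > 0) else 0.0
    return float(np.clip(trust + lr * (target - trust), 0.0, 1.0))

# Usage: two objectives (yield gain, environmental penalty), trust evolving per episode.
trust = 0.5
for episode_gain in [0.2, -0.1, 0.3]:
    r = np.array([episode_gain, -0.05])
    w = np.array([0.7, 0.3])
    print(scalarize(r, w, trust))
    trust = update_trust(trust, followed_advice=True, outcome_gain=episode_gain)
```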
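
Similarly, the interpretability appeal of a KAN-based policy is that each input-output edge carries its own learnable one-dimensional function that can be inspected after training. The minimal layer below is a sketch under assumptions: it uses a fixed Gaussian radial-basis expansion in place of the B-splines used in full KAN implementations, and the class name `TinyKANLayer` and the load-balancing policy head are hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn

class TinyKANLayer(nn.Module):
    """Kolmogorov-Arnold-style layer: one learnable univariate function per
    input-output edge, parameterized here by RBF coefficients (a simplification)."""
    def __init__(self, in_dim: int, out_dim: int, n_basis: int = 8):
        super().__init__()
        self.register_buffer("centers", torch.linspace(-1.0, 1.0, n_basis))
        # One coefficient vector per (output, input) edge.
        self.coef = nn.Parameter(0.1 * torch.randn(out_dim, in_dim, n_basis))

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, in_dim)
        # Evaluate Gaussian bumps at each input: (batch, in_dim, n_basis).
        phi = torch.exp(-((x.unsqueeze(-1) - self.centers) ** 2) / 0.1)
        # Apply each edge's univariate function, then sum over inputs per output.
        return torch.einsum("bif,oif->bo", phi, self.coef)

# A small policy head for a load-balancing agent: state features -> server scores.
policy = nn.Sequential(TinyKANLayer(4, 16), TinyKANLayer(16, 3))
logits = policy(torch.rand(2, 4))   # batch of 2 states, 3 candidate servers
print(logits.shape)                 # torch.Size([2, 3])
```

In practice, the inputs to each layer would be normalized to the basis range, and the per-edge coefficients could be plotted or pruned to read off which state features drive each routing decision.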

Sources

Developing and Integrating Trust Modeling into Multi-Objective Reinforcement Learning for Intelligent Agricultural Management

Explaining Strategic Decisions in Multi-Agent Reinforcement Learning for Aerial Combat Tactics

Embedded Mean Field Reinforcement Learning for Perimeter-defense Game

Interpretable Reinforcement Learning for Load Balancing using Kolmogorov-Arnold Networks
