Fairness and Safety in Multi-Agent Systems

The field of multi-agent systems is moving toward incorporating fairness and safety guarantees directly into decision-making. Researchers are developing novel frameworks and algorithms that balance efficiency against fairness, including incentive-based approaches and game-theoretic models. These advances have the potential to improve outcomes in complex settings such as resource allocation and reinforcement learning. Noteworthy papers in this area include:

- Online Multi-Class Selection with Group Fairness Guarantee, which introduces a novel lossless rounding scheme to ensure fairness across classes.
- Guardian: Decoupling Exploration from Safety in Reinforcement Learning, which proposes a framework that decouples policy optimization from safety enforcement (a sketch of this decoupling idea appears after this list).
- A General Incentives-Based Framework for Fairness in Multi-agent Resource Allocation, which leverages action-value functions to balance efficiency and fairness (see the second sketch below).
- The Oversight Game: Learning to Cooperatively Balance an AI Agent's Safety and Autonomy, which models the interaction between an agent and a human as a two-player Markov Game to provide an alignment guarantee.
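To make the decoupling idea concrete, here is a minimal shield-style sketch: the exploration policy is optimized with no safety term in its objective, and a separate safety layer overrides any action it can identify as unsafe at execution time. The 1-D track environment, the safety predicate, and all names below are illustrative assumptions, not Guardian's actual API.

```python
import random

TRACK_LENGTH = 10  # states 0..9; leaving the track is "unsafe"

def is_safe(state, action):
    """Safety predicate: the action is safe if it keeps us on the track."""
    return 0 <= state + action < TRACK_LENGTH

def shielded_step(state, proposed_action, safe_fallback=0):
    """Enforce safety after the policy acts: exploration is optimized
    freely, and this shield only overrides provably unsafe actions."""
    if is_safe(state, proposed_action):
        return proposed_action
    return safe_fallback  # known-safe action: stay in place

# The exploration policy proposes actions with no safety term in its objective.
state = TRACK_LENGTH - 1
proposed = random.choice([-1, 0, 1])
executed = shielded_step(state, proposed)
print(f"state={state}, proposed={proposed}, executed={executed}")
```

In the same hedged spirit, one toy reading of the incentive-based fairness idea is to score each candidate allocation as an efficiency value plus a weighted incentive that favors agents below the mean accumulated utility. The weight `lam`, the per-agent values (a stand-in for a learned action-value function), and the utilities are made up for illustration.

```python
def fairness_incentive(utilities, agent):
    """Positive for agents below the mean accumulated utility."""
    mean = sum(utilities.values()) / len(utilities)
    return mean - utilities[agent]

def allocate(resource_values, utilities, lam=0.5):
    """Give the resource to the agent maximizing efficiency + lam * fairness."""
    return max(
        resource_values,
        key=lambda a: resource_values[a] + lam * fairness_incentive(utilities, a),
    )

values = {"a1": 1.2, "a2": 0.8, "a3": 1.0}     # per-agent efficiency value
utilities = {"a1": 3.0, "a2": 1.0, "a3": 2.0}  # utility accumulated so far
print(allocate(values, utilities))  # fairness flips the pick from "a1" to "a2"
```

With `lam=0` the allocation is purely efficiency-driven ("a1" wins); raising `lam` shifts the choice toward under-served agents ("a2"), which is the efficiency-fairness trade-off the incentive term is meant to expose.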

Sources

Online Multi-Class Selection with Group Fairness Guarantee

System-Theoretic Analysis of Dynamic Generalized Nash Equilibrium Problems -- Turnpikes and Dissipativity

Do You Trust the Process?: Modeling Institutional Trust for Community Adoption of Reinforcement Learning Policies

Guardian: Decoupling Exploration from Safety in Reinforcement Learning

Linear effects, exceptions, and resource safety: a Curry-Howard correspondence for destructors

LRT-Diffusion: Calibrated Risk-Aware Guidance for Diffusion Policies

A Game-Theoretic Spatio-Temporal Reinforcement Learning Framework for Collaborative Public Resource Allocation

A General Incentives-Based Framework for Fairness in Multi-agent Resource Allocation

The Oversight Game: Learning to Cooperatively Balance an AI Agent's Safety and Autonomy
