Advances in Fairness and Transparency in AI Systems

The field of artificial intelligence is placing growing emphasis on fairness, transparency, and accountability. Recent research focuses on mitigating bias and ensuring equitable outcomes across applications such as healthcare, law, and urban planning. One key direction is the development of fairness-aware algorithms and frameworks that detect and correct biases in data and models; another is the development of transparent, explainable AI systems that expose their decision-making processes. Noteworthy papers in this regard include Urban-R1, which proposes a reinforcement learning-based framework to mitigate geospatial biases in urban general intelligence, and FairNet, which introduces a dynamic fairness correction framework that preserves model performance. Papers such as Visibility Allocation Systems and Bias by Design? further highlight the importance of considering fairness and transparency in the design and deployment of AI systems.
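To make the bias-detection side of this work concrete, here is a minimal sketch (not taken from any cited paper) of demographic parity difference, one of the standard metrics fairness-aware frameworks use to flag disparate outcomes; the predictions and group labels are illustrative.

```python
def demographic_parity_difference(preds, groups):
    """Absolute gap in positive-prediction rates between two groups.

    preds:  binary model decisions (0 or 1)
    groups: group membership label for each prediction
    """
    rates = {}
    for g in set(groups):
        selected = [p for p, gr in zip(preds, groups) if gr == g]
        rates[g] = sum(selected) / len(selected)
    vals = list(rates.values())
    return abs(vals[0] - vals[1])

# Group "a" receives positive decisions at rate 3/4, group "b" at 1/4.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A value near 0 indicates similar treatment across groups; fairness-aware methods like those surveyed above typically add a correction step when this gap exceeds a chosen threshold.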

Sources

Operationalising Extended Cognition: Formal Metrics for Corporate Knowledge and Legal Accountability

Urban-R1: Reinforced MLLMs Mitigate Geospatial Biases for Urban General Intelligence

Visibility Allocation Systems: How Algorithmic Design Shapes Online Visibility and Societal Outcomes

Discrimination, Artificial Intelligence, and Algorithmic Decisions

"Over-the-Hood" AI Inclusivity Bugs and How 3 AI Product Teams Found and Fixed Them

REPAIR Approach for Social-based City Reconstruction Planning in case of natural disasters

FnRGNN: Distribution-aware Fairness in Graph Neural Network

Social World Model-Augmented Mechanism Design Policy Learning

FairNet: Dynamic Fairness Correction without Performance Loss via Contrastive Conditional LoRA

FairGRPO: Fair Reinforcement Learning for Equitable Clinical Reasoning

The Verification-Value Paradox: A Normative Critique of Gen AI in Legal Practice

Alternatives to the Laplacian for Scalable Spectral Clustering with Group Fairness Constraints

Bias by Design? How Data Practices Shape Fairness in AI Healthcare Systems

Towards the Formalization of a Trustworthy AI for Mining Interpretable Models explOiting Sophisticated Algorithms

Equitable Survival Prediction: A Fairness-Aware Survival Modeling (FASM) Approach

GRACE: GRaph-based Addiction Care prEdiction
