Mitigating Bias in AI Systems

AI research is increasingly focused on mitigating bias in machine learning models and ensuring fairness. Recent studies highlight the importance of intersectional biases, such as those affecting people with disabilities, and the need for more nuanced debiasing approaches. Large language models (LLMs) have been explored as a means of generating counterfactual examples to reduce bias, with promising results. Researchers are also investigating how human label variation affects model fairness and developing methods that preserve the diversity of human annotations.
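The counterfactual idea above can be illustrated with a minimal sketch of counterfactual data augmentation: each training sentence is paired with a variant in which protected-attribute terms are swapped, so a model sees both versions and cannot rely on the attribute. The word list and the token-level swap here are simplifying assumptions for illustration; the papers below use LLMs to generate far richer counterfactuals.

```python
# Minimal counterfactual data augmentation sketch (illustrative word list;
# real systems handle ambiguity, e.g. "her" as possessive vs. object).
SWAP = {"he": "she", "she": "he", "his": "her", "her": "his",
        "man": "woman", "woman": "man"}

def counterfactual(sentence: str) -> str:
    """Return the sentence with each gendered token swapped."""
    out = []
    for tok in sentence.split():
        core = tok.lower().strip(".,!?")
        if core in SWAP:
            swapped = SWAP[core]
            if tok[0].isupper():                      # preserve capitalization
                swapped = swapped.capitalize()
            trailing = tok[len(tok.rstrip(".,!?")):]  # preserve punctuation
            out.append(swapped + trailing)
        else:
            out.append(tok)
    return " ".join(out)

def augment(corpus):
    """Pair every sentence with its counterfactual twin."""
    return [s for sent in corpus for s in (sent, counterfactual(sent))]
```

Training on `augment(corpus)` gives the model matched pairs that differ only in the protected attribute, which is the core mechanism behind counterfactual debiasing.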

Noteworthy papers in this area include: Large Language Models for Imbalanced Classification, which proposes an LLM-based oversampling method that increases the diversity of synthetic minority-class samples; Fairness Without Labels, which introduces a pseudo-balancing strategy for mitigating bias in semi-supervised learning; and From Detection to Mitigation, which presents a comprehensive bias detection and mitigation framework for deep learning models in chest X-ray diagnosis.
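The pseudo-balancing idea can be sketched as follows: when adding pseudo-labeled examples in semi-supervised learning, cap every predicted class at the size of the rarest confident class, so the pseudo-labeled set cannot amplify an existing imbalance. The confidence threshold and the `(example_id, label, confidence)` format are illustrative assumptions, not the paper's exact interface.

```python
from collections import defaultdict

def pseudo_balance(predictions, threshold=0.9):
    """predictions: list of (example_id, predicted_class, confidence).
    Returns a class-balanced subset of confident pseudo-labels."""
    by_class = defaultdict(list)
    for ex_id, label, conf in predictions:
        if conf >= threshold:                     # keep confident predictions only
            by_class[label].append((conf, ex_id))
    if not by_class:
        return []
    # cap every class at the size of the smallest confident class
    cap = min(len(items) for items in by_class.values())
    selected = []
    for label, items in by_class.items():
        items.sort(reverse=True)                  # highest confidence first
        selected += [(ex_id, label) for _, ex_id in items[:cap]]
    return selected
```

The design choice is deliberately conservative: discarding surplus majority-class pseudo-labels trades some data for a guarantee that self-training does not reinforce the dataset's skew.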

Sources

Large Language Models for Imbalanced Classification: Diversity makes the difference

Fairness Without Labels: Pseudo-Balancing for Bias Mitigation in Face Gender Classification

From Detection to Mitigation: Addressing Bias in Deep Learning Models for Chest X-Ray Diagnosis

ABLEIST: Intersectional Disability Bias in LLM-Generated Hiring Scenarios

On the Interplay between Human Label Variation and Model Fairness

From Noise to Signal to Selbstzweck: Reframing Human Label Variation in the Era of Post-training in NLP

MEDEQUALQA: Evaluating Biases in LLMs with Counterfactual Reasoning

Unmasking Hiring Bias: Platform Data Analysis and Controlled Experiments on Bias in Online Freelance Marketplaces via RAG-LLM Generated Contents

Stable LLM Ensemble: Interaction between Example Representativeness and Diversity

LLM-Guided Synthetic Augmentation (LGSA) for Mitigating Bias in AI Systems

Rebalancing with Calibrated Sub-classes (RCS): An Enhanced Approach for Robust Imbalanced Classification

Robust or Suggestible? Exploring Non-Clinical Induction in LLM Drug-Safety Decisions
