Advances in Mitigating Bias in AI Systems

The field of artificial intelligence is moving toward fairer, less biased systems. Recent research focuses on identifying and mitigating bias in large language models, vision-language models, and other AI systems. Key directions include frameworks for evaluating and reducing polarization-related biases and methods for detecting and mitigating gender bias across applications. There is also growing interest in fair synthetic data generation, information-theoretic fairness, and bounding and estimating fairness from incomplete data. Noteworthy papers include 'BIPOLAR: Polarization-based granular framework for LLM bias evaluation', which proposes a reusable framework for evaluating polarization-related biases in large language models, and 'FairTabGen: Unifying Counterfactual and Causal Fairness in Synthetic Tabular Data Generation', which presents a fairness-aware framework for generating synthetic tabular data. Overall, the field is converging on methods that mitigate bias and promote more equitable outcomes.
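To make the group-fairness notion behind several of these works concrete, the sketch below computes the statistical parity (demographic parity) difference, the kind of quantity that work on bounded statistical parity constraints or post-processing for group fairness seeks to limit or estimate. This is a minimal illustration; the function name and toy data are assumptions for exposition and are not drawn from any of the listed papers.

```python
# Minimal sketch (illustrative, not from any cited paper): the statistical
# parity difference compares positive-prediction rates across two groups.

def statistical_parity_difference(predictions, groups, group_a, group_b):
    """Difference in positive-prediction rates between two demographic groups.

    A value of 0 means both groups receive positive predictions at the same
    rate (statistical parity); larger absolute values indicate more disparity.
    """
    def positive_rate(g):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        return sum(members) / max(1, len(members))

    return positive_rate(group_a) - positive_rate(group_b)


# Toy example: binary predictions for eight individuals from two groups.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
grps = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(statistical_parity_difference(preds, grps, "a", "b"))  # 0.75 - 0.25 = 0.5
```

In practice, constraints of this form are enforced during training, imposed via post-processing on model outputs, or bounded when group labels are only partially observed, which is the setting several of the papers below address.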

Sources

BIPOLAR: Polarization-based granular framework for LLM bias evaluation

Bias is a Math Problem, AI Bias is a Technical Problem: 10-year Literature Review of AI/LLM Bias Research Reveals Narrow [Gender-Centric] Conceptions of 'Bias', and Academia-Industry Gap

Group Fairness Meets the Black Box: Enabling Fair Algorithms on Closed LLMs via Post-Processing

Vision-Language Models display a strong gender bias

Is General-Purpose AI Reasoning Sensitive to Data-Induced Cognitive Biases? Dynamic Benchmarking on Typical Software Engineering Dilemmas

Online Anti-sexist Speech: Identifying Resistance to Gender Bias in Political Discourse

FairTabGen: Unifying Counterfactual and Causal Fairness in Synthetic Tabular Data Generation

Information-Theoretic Fairness with A Bounded Statistical Parity Constraint

Beyond Internal Data: Bounding and Estimating Fairness from Incomplete Data

Spot the BlindSpots: Systematic Identification and Quantification of Fine-Grained LLM Biases in Contact Center Summaries

Fair Play in the Newsroom: Actor-Based Filtering Gender Discrimination in Text Corpora

Combating Homelessness Stigma with LLMs: A New Multi-Modal Dataset for Bias Detection

Who Gets the Mic? Investigating Gender Bias in the Speaker Assignment of a Speech-LLM

When Algorithms Infer Gender: Revisiting Computational Phenotyping with Electronic Health Records Data

Bridging Research Gaps Between Academic Research and Legal Investigations of Algorithmic Discrimination

Revisiting Pre-processing Group Fairness: A Modular Benchmarking Framework

Fairness for the People, by the People: Minority Collective Action

HebID: Detecting Social Identities in Hebrew-language Political Text

The Enemy from Within: A Study of Political Delegitimization Discourse in Israeli Political Speech

Reading Between the Lines: A Study of Thematic Bias in Book Recommender Systems
