Advances in Mitigating Bias in AI Systems

The field of artificial intelligence is moving toward fairer, less biased systems. Recent research focuses on identifying and mitigating biases in large language models, vision-language models, and other AI systems. Key directions include frameworks for evaluating and reducing polarization-related biases, methods for detecting and mitigating gender bias across applications, fair synthetic data generation, information-theoretic fairness, and techniques for bounding and estimating fairness from incomplete data. Noteworthy papers include 'BIPOLAR: Polarization-based granular framework for LLM bias evaluation', which proposes a reusable framework for evaluating polarization-related biases in large language models, and 'FairTabGen: Unifying Counterfactual and Causal Fairness in Synthetic Tabular Data Generation', which presents a fairness-aware framework for generating synthetic tabular data. Taken together, these efforts point toward AI systems that mitigate bias and deliver more equitable outcomes.
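To make two of these directions concrete, here is a minimal sketch, not taken from any of the cited papers, of (a) an information-theoretic fairness check via the mutual information between predictions and a sensitive attribute, and (b) bounding a demographic parity gap when some sensitive labels are incomplete. The synthetic data and all variable names are illustrative assumptions.

```python
# Illustrative sketch: information-theoretic fairness check and a bound on
# demographic parity under missing sensitive attributes. The data below is
# synthetic and purely for demonstration.
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)
y_pred = rng.integers(0, 2, size=1_000)            # binary model predictions
group = rng.integers(0, 2, size=1_000).astype(float)
group[rng.random(1_000) < 0.2] = np.nan            # 20% of group labels missing

# (a) Information-theoretic check: I(pred; group) = 0 on the observed subset
# means predictions are statistically independent of the sensitive attribute,
# i.e., demographic parity holds on that subset.
obs = ~np.isnan(group)
mi = mutual_info_score(group[obs], y_pred[obs])
print(f"I(pred; group) on observed rows: {mi:.4f} nats")

def signed_gap(y, g):
    """Signed demographic parity gap: P(y=1 | g=0) - P(y=1 | g=1)."""
    return y[g == 0].mean() - y[g == 1].mean()

# (b) Bounding the gap under missing group labels: assigning every missing
# positive prediction to group 0 and every missing negative to group 1
# maximizes the signed gap; the mirror assignment minimizes it. The gap under
# the true (unobserved) labels must lie in the resulting interval.
missing = np.isnan(group)
g_hi, g_lo = group.copy(), group.copy()
g_hi[missing & (y_pred == 1)], g_hi[missing & (y_pred == 0)] = 0, 1
g_lo[missing & (y_pred == 1)], g_lo[missing & (y_pred == 0)] = 1, 0

print(f"Gap on observed rows only: {signed_gap(y_pred[obs], group[obs]):+.4f}")
print(f"Bound over all completions: [{signed_gap(y_pred, g_lo):+.4f}, "
      f"{signed_gap(y_pred, g_hi):+.4f}]")
```

The width of the interval in (b) grows with the missing-data rate, which is one reason estimating fairness from incomplete data is treated as a bounding problem rather than a point estimate.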
Sources
Bias is a Math Problem, AI Bias is a Technical Problem: 10-year Literature Review of AI/LLM Bias Research Reveals Narrow [Gender-Centric] Conceptions of 'Bias', and Academia-Industry Gap
Is General-Purpose AI Reasoning Sensitive to Data-Induced Cognitive Biases? Dynamic Benchmarking on Typical Software Engineering Dilemmas
Spot the BlindSpots: Systematic Identification and Quantification of Fine-Grained LLM Biases in Contact Center Summaries
When Algorithms Infer Gender: Revisiting Computational Phenotyping with Electronic Health Records Data