AI research is placing growing emphasis on bias and fairness in AI systems. Recent studies highlight the importance of fairness auditing, transparent dataset documentation, and inclusive model validation pipelines. Researchers are exploring new methods to diagnose and mitigate structural inequities, including generative models for augmenting imbalanced datasets and frameworks for strategically leveraging bias. Notable papers in this area include:
- Predictive Representativity: Uncovering Racial Bias in AI-based Skin Cancer Detection, which introduces a post-hoc fairness auditing framework and underscores the need for transparency in dataset documentation (a minimal auditing sketch follows this list).
- Should Bias Always be Eliminated?, which presents a theoretical analysis and a framework for leveraging bias to complement invariant representations during inference.
- Bringing Balance to Hand Shape Classification, which demonstrates that generative models can effectively address class imbalance in sign language handshape classification (see the rebalancing sketch after this list).
- Towards Facilitated Fairness Assessment of AI-based Skin Lesion Classifiers Through GenAI-based Image Synthesis, which explores the use of generative AI for fairness assessment in medical imaging.
- Beyond Internal Data: Constructing Complete Datasets for Fairness Testing, which proposes constructing complete synthetic datasets for fairness testing when real data is limited (the final sketch after this list illustrates the idea of full subgroup coverage).
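
The auditing framework in the first paper is specific to dermatology datasets, so as a rough illustration of what a post-hoc group audit computes, the sketch below reports standard group-fairness quantities: per-group selection rate and true-positive rate, plus demographic-parity and equalized-odds style gaps. The arrays and group labels are toy data, not drawn from any of the papers.

```python
import numpy as np

def group_fairness_audit(y_true, y_pred, group):
    """Report per-group selection rate and true-positive rate,
    plus the largest pairwise gaps (demographic-parity and
    equalized-odds style disparities)."""
    report = {}
    for g in np.unique(group):
        mask = group == g
        selection_rate = y_pred[mask].mean()          # P(pred=1 | group=g)
        positives = mask & (y_true == 1)
        tpr = y_pred[positives].mean() if positives.any() else float("nan")
        report[g] = {"selection_rate": selection_rate, "tpr": tpr}
    rates = [m["selection_rate"] for m in report.values()]
    tprs = [m["tpr"] for m in report.values()]
    report["dp_gap"] = max(rates) - min(rates)        # demographic parity gap
    report["eo_gap"] = np.nanmax(tprs) - np.nanmin(tprs)  # TPR (equal opportunity) gap
    return report

# Toy audit: predictions for two demographic groups.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(group_fairness_audit(y_true, y_pred, group))
```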
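
For the handshape paper's theme of generative rebalancing, here is a minimal sketch under stated assumptions: compute each class's deficit relative to the majority class and top it up with synthetic samples. `sample_fn` is a hypothetical stand-in for a trained conditional generator (class-conditional GAN, diffusion model, etc.); the placeholder below just draws Gaussian noise.

```python
from collections import Counter
import numpy as np

def balance_with_synthetic(X, y, sample_fn):
    """Top up every minority class to the majority-class count
    using synthetic samples drawn from sample_fn(label, n)."""
    counts = Counter(y)
    target = max(counts.values())
    X_parts, y_parts = [X], [y]
    for label, n in counts.items():
        deficit = target - n
        if deficit > 0:
            X_parts.append(sample_fn(label, deficit))   # synthetic features
            y_parts.append(np.full(deficit, label))     # matching labels
    return np.concatenate(X_parts), np.concatenate(y_parts)

# Placeholder "generator": noise around a class-specific mean.
rng = np.random.default_rng(0)
fake_generator = lambda label, n: rng.normal(loc=label, size=(n, 4))

X = rng.normal(size=(10, 4))
y = np.array([0] * 8 + [1] * 2)            # imbalanced: 8 vs 2
X_bal, y_bal = balance_with_synthetic(X, y, fake_generator)
print(Counter(y_bal))                      # Counter({0: 8, 1: 8})
```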
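
Finally, the "complete datasets" idea can be pictured as enumerating every combination of protected and clinical attributes so that no subgroup is missing from the test set. The paper's actual construction is more involved, and the attribute domains below are invented for illustration; each cell of the grid would then be populated with synthetic samples (e.g., from a generative pipeline like the one sketched above).

```python
from itertools import product

# Hypothetical attribute domains; in practice these would come
# from a data specification, not from the (incomplete) internal data.
domains = {
    "age_band": ["<30", "30-60", ">60"],
    "skin_tone": ["I-II", "III-IV", "V-VI"],
    "sex": ["F", "M"],
}

# Enumerate every attribute combination so every subgroup is covered,
# unlike an internal dataset that may under-represent some groups.
complete_grid = [
    dict(zip(domains, combo)) for combo in product(*domains.values())
]
print(len(complete_grid))   # 3 * 3 * 2 = 18 subgroups
for row in complete_grid[:3]:
    print(row)
```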