The field of artificial intelligence is placing growing emphasis on fairness and bias detection. Researchers are investigating how AI systems can perpetuate and amplify existing social biases, and are developing new methods to detect and mitigate them. Key areas of focus include building more inclusive and diverse datasets and designing new fairness metrics and evaluation frameworks. There is also growing interest in crowdsourced human judgment and perception-driven approaches to detecting bias in AI systems. Noteworthy papers in this area include:
- Perception-Driven Bias Detection in Machine Learning via Crowdsourced Visual Judgment, which introduces a framework that detects bias by aggregating crowdsourced human visual judgments.
- Evaluating Fairness and Mitigating Bias in Machine Learning: A Novel Technique using Tensor Data and Bayesian Regression, which applies tensor data and Bayesian regression to evaluate fairness in image classification tasks.
- DETONATE: A Benchmark for Text-to-Image Alignment and Kernelized Direct Preference Optimization, which pairs a large-scale text-to-image alignment benchmark with a kernelized variant of direct preference optimization.
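To make the idea of a fairness metric concrete, here is a minimal sketch of demographic parity difference, one of the standard group-fairness metrics in this literature. The function name and the synthetic data are illustrative assumptions, not taken from any of the papers above.

```python
def demographic_parity_difference(predictions, groups):
    """Absolute difference in positive-prediction rates between two groups.

    predictions: iterable of 0/1 model outputs.
    groups: group labels (e.g. "A"/"B"), aligned with predictions.
    Returns 0.0 when both groups receive positive predictions at the
    same rate; larger values indicate greater disparity.
    """
    rates = {}
    for g in set(groups):
        # Positive-prediction rate within group g.
        group_preds = [p for p, gr in zip(predictions, groups) if gr == g]
        rates[g] = sum(group_preds) / len(group_preds)
    rate_a, rate_b = rates.values()
    return abs(rate_a - rate_b)

# Synthetic predictions for two demographic groups (illustration only).
preds = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.4 (0.6 vs 0.2)
```

A real evaluation framework would extend this with multiple metrics (e.g. equalized odds), confidence intervals, and support for more than two groups.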