Addressing Bias and Fairness in AI Systems

The field of artificial intelligence is moving toward a more inclusive and participatory approach, with a focus on mitigating the risks and biases of AI algorithms. Researchers are exploring new methods for promoting fairness and transparency in AI systems, including building more diverse and representative training data and creating new frameworks for evaluating and addressing bias. A key line of research examines how AI systems can be designed to be more sensitive to the needs and values of diverse cultures and communities.

Noteworthy papers in this area include "Co-Producing AI: Toward an Augmented, Participatory Lifecycle," which introduces a lifecycle for AI production centered on co-production, diversity, equity, inclusion, and multidisciplinary collaboration; and "How Deep Is Representational Bias in LLMs? The Cases of Caste and Religion," which presents a systematic audit of a large language model, revealing how deeply representational biases are encoded and how they extend to less-explored dimensions of identity.

Sources

Co-Producing AI: Toward an Augmented, Participatory Lifecycle

Exploring Fairness across Fine-Grained Attributes in Large Vision-Language Models

How Deep Is Representational Bias in LLMs? The Cases of Caste and Religion

Trustworthiness of Legal Considerations for the Use of LLMs in Education

I Think, Therefore I Am Under-Qualified? A Benchmark for Evaluating Linguistic Shibboleth Detection in LLM Hiring Evaluations

Whose Truth? Pluralistic Geo-Alignment for (Agentic) AI

The World According to LLMs: How Geographic Origin Influences LLMs' Entity Deduction Capabilities
