Addressing Biases in Large Language Models

The field of Large Language Models (LLMs) is moving toward systematically addressing the biases and fairness concerns inherent in their applications. Recent studies highlight the importance of evaluating and mitigating bias in LLMs, particularly in domains such as game playing, hiring evaluation, and recommender systems, where concerns about fairness, trust, and reliability have prompted researchers to develop methods for detecting and reducing bias. Noteworthy papers in this area include the following (a brief illustrative fairness check follows the list):

Sequential Cohort Selection, which studies the problem of fair cohort selection from an unknown population.

Invisible Filters: Cultural Bias in Hiring Evaluations Using Large Language Models, which examines potential biases in AI-driven hiring evaluation across cultures.

SMITE: Enhancing Fairness in LLMs through Optimal In-Context Example Selection via Dynamic Validation, which introduces an approach to improving LLM performance and fairness through dynamic validation of in-context examples.

FAIRGAMER: Evaluating Biases in the Application of Large Language Models to Video Games, which presents a bias evaluation benchmark for LLMs in video game scenarios.

Revealing Potential Biases in LLM-Based Recommender Systems in the Cold Start Setting, which evaluates fairness in zero-context recommendation using a modular pipeline.
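To make the kind of evaluation these papers perform more concrete, the sketch below illustrates a generic counterfactual fairness probe for an LLM-based hiring evaluator: it scores otherwise-identical resume summaries that differ only in a culturally marked applicant name and reports the gap in mean scores between the two groups. This is a minimal sketch of the general idea, not the methodology of any paper listed here; all names (RESUME_TEMPLATE, NAME_GROUPS, counterfactual_gap, the toy scorer) are assumptions introduced for illustration, and in practice score_fn would wrap a call to the model under evaluation.

```python
# Illustrative counterfactual fairness probe for an LLM-based hiring evaluator.
# All identifiers here are hypothetical and not drawn from the papers above.
from statistics import mean
from typing import Callable

# Resume summaries that are identical except for a culturally marked name
# (the protected attribute under test).
RESUME_TEMPLATE = (
    "Applicant: {name}. 5 years of backend engineering experience, "
    "B.Sc. in Computer Science, led a team of four on a payments platform."
)

NAME_GROUPS = {
    "group_a": ["Emily Carter", "Jake Thompson"],
    "group_b": ["Amara Okafor", "Priya Raghavan"],
}


def counterfactual_gap(score_fn: Callable[[str], float]) -> float:
    """Absolute difference in mean scores between the two name groups.

    `score_fn` stands in for an LLM-backed scorer mapping a resume text to a
    suitability score in [0, 1]; in practice it would wrap a model API call.
    """
    group_means = {
        group: mean(score_fn(RESUME_TEMPLATE.format(name=n)) for n in names)
        for group, names in NAME_GROUPS.items()
    }
    return abs(group_means["group_a"] - group_means["group_b"])


if __name__ == "__main__":
    # Toy deterministic scorer so the sketch runs without any model access.
    # It (undesirably) rewards shorter applicant names, mimicking a spurious
    # bias that the probe should surface as a non-zero gap.
    def toy_scorer(text: str) -> float:
        name = text.split("Applicant: ")[1].split(".")[0]
        return 1.0 / len(name)

    print(f"Mean score gap between groups: {counterfactual_gap(toy_scorer):.4f}")
```

In a real audit, the scorer would query the LLM being evaluated, the name lists would be larger and drawn from validated corpora, and the observed gap would be tested for statistical significance rather than reported as a single number.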

Sources

Format as a Prior: Quantifying and Analyzing Bias in LLMs for Heterogeneous Data

Ethical Considerations of Large Language Models in Game Playing

Sequential Cohort Selection

Invisible Filters: Cultural Bias in Hiring Evaluations Using Large Language Models

SMITE: Enhancing Fairness in LLMs through Optimal In-Context Example Selection via Dynamic Validation

FAIRGAMER: Evaluating Biases in the Application of Large Language Models to Video Games

Just Because You Can, Doesn't Mean You Should: LLMs for Data Fitting

Revealing Potential Biases in LLM-Based Recommender Systems in the Cold Start Setting
