Human-AI Collaboration and Fairness

The field of human-AI collaboration is moving toward a deeper understanding of the complex interactions between humans and AI systems, with a particular focus on fairness and bias. One line of research investigates how gender bias alignment between humans and AI systems affects fairness perceptions and reliance on AI recommendations, and how AI systems can be designed to mitigate these biases. Another develops methods for familiarizing humans with AI teammates, including documentation such as model cards, training, and exploratory interaction. Work on algorithmic hiring highlights the importance of algorithmic design choices in achieving organizational diversity goals, showing that enforcing equal representation at the shortlist stage does not necessarily translate into more diverse final hires. Counterfactual explanations, which show how a decision would change under altered inputs, are also being explored as a way to prevent the adoption of algorithmic bias in human decision-making. Noteworthy papers include:

  • Algorithmic Hiring and Diversity, which proposes a complementary algorithmic approach that diversifies shortlists to improve gender diversity in final hires.
  • When Bias Backfires, which examines how counterfactual explanations modulate the adoption of algorithmic bias in human decision-making.

Sources

It's only fair when I think it's fair: How Gender Bias Alignment Undermines Distributive Fairness in Human-AI Collaboration

Model Cards for AI Teammates: Comparing Human-AI Team Familiarization Methods for High-Stakes Environments

Algorithmic Hiring and Diversity: Reducing Human-Algorithm Similarity for Better Outcomes

When Bias Backfires: The Modulatory Role of Counterfactual Explanations on the Adoption of Algorithmic Bias in XAI-Supported Human Decision-Making

Fairness and Efficiency in Human-Agent Teams: An Iterative Algorithm Design Approach
