The field of human-AI collaboration is developing a deeper understanding of how humans and AI systems interact, with particular attention to fairness and bias. Researchers are investigating how the alignment of gender bias between humans and AI affects fairness perceptions and reliance on AI recommendations, and how AI systems can be designed to mitigate these biases. Another line of work develops methods for familiarizing humans with AI teammates, including documentation, training, and exploratory interaction. Studies also underscore the importance of algorithmic design choices for achieving organizational diversity goals, showing that enforcing equal representation at the shortlist stage does not necessarily translate into more diverse final hires. Counterfactual explanations are likewise being explored as a way to prevent humans from adopting algorithmic bias in their own decision-making. Noteworthy papers include:
- Algorithmic Hiring and Diversity, which proposes a complementary algorithmic approach to diversify shortlists and enhance gender diversity in final hires.
- When Bias Backfires, which examines the modulatory role of counterfactual explanations in the adoption of algorithmic bias in human decision-making.