Advances in Fairness and Transparency in Machine Learning

The field of machine learning is placing greater emphasis on fairness and transparency, with a focus on techniques and tools that detect and mitigate bias in AI systems. Researchers are exploring new approaches to fairness, including human-in-the-loop methods and context-aware bias removal, while fairness APIs and logging requirements for continuous auditing have emerged as key areas of research. Notably, the FairLoop tool provides software support for human-guided bias mitigation in neural network-based prediction models, and a human-in-the-loop approach for improving fairness in predictive business process monitoring proposes a way to identify and rectify biased decisions. Work on logging requirements for continuous auditing of responsible ML-based applications highlights the need for enhanced logging practices and tooling to support auditable and transparent ML systems.
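To make these ideas concrete, below is a minimal sketch, not the tooling from any of the papers above, of how structured prediction logging and a simple group-fairness check could support continuous auditing with a human-in-the-loop escalation step. All names (log_prediction, demographic_parity_gap, the 0.2 threshold, the audit file path) are illustrative assumptions.

```python
import json
import logging
from collections import defaultdict
from datetime import datetime, timezone

# Structured audit log: each prediction is written as one JSON line so an
# auditor can recompute fairness metrics over any time window later.
audit_logger = logging.getLogger("ml_audit")
audit_logger.setLevel(logging.INFO)
audit_logger.addHandler(logging.FileHandler("predictions_audit.jsonl"))


def log_prediction(case_id, prediction, sensitive_attribute):
    """Append one auditable record per model decision (hypothetical schema)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_id": case_id,
        "prediction": prediction,
        "sensitive_attribute": sensitive_attribute,
    }
    audit_logger.info(json.dumps(record))


def demographic_parity_gap(records):
    """Largest difference in positive-prediction rate between groups."""
    positives, totals = defaultdict(int), defaultdict(int)
    for r in records:
        group = r["sensitive_attribute"]
        totals[group] += 1
        positives[group] += int(r["prediction"] == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())


if __name__ == "__main__":
    # Toy records; in practice these would be read back from the audit log.
    sample = [
        {"prediction": 1, "sensitive_attribute": "A"},
        {"prediction": 0, "sensitive_attribute": "A"},
        {"prediction": 1, "sensitive_attribute": "B"},
        {"prediction": 1, "sensitive_attribute": "B"},
    ]
    gap = demographic_parity_gap(sample)
    if gap > 0.2:  # threshold chosen purely for illustration
        # Escalation to a human reviewer stands in for the human-in-the-loop step.
        print(f"Demographic parity gap {gap:.2f} exceeds threshold; escalate to reviewer.")
```

The design choice here is to log decisions in an append-only, machine-readable form so fairness metrics can be recomputed continuously rather than only at training time, and to route threshold violations to a human reviewer instead of auto-correcting them.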

Sources

Applications and Challenges of Fairness APIs in Machine Learning Software

A Human-In-The-Loop Approach for Improving Fairness in Predictive Business Process Monitoring

Logging Requirement for Continuous Auditing of Responsible Machine Learning-based Applications

FairLoop: Software Support for Human-Centric Fairness in Predictive Business Process Monitoring

From stand-up to start-up: exploring entrepreneurship competences and STEM women's intention

Dynamics of Gender Bias in Software Engineering

Built with on top of