The field of machine learning is placing greater emphasis on fairness and transparency, with a focus on developing techniques and tools to detect and mitigate bias in AI systems. Researchers are exploring new approaches to fairness, including human-in-the-loop methods and context-aware bias removal; the development of fairness APIs and logging requirements for continuous auditing are also active areas of research. Notably, the FairLoop tool enables human-guided bias mitigation in neural-network-based prediction models; the paper "A Human-In-The-Loop Approach for Improving Fairness in Predictive Business Process Monitoring" proposes a novel approach for identifying and rectifying biased decisions; and the paper "Logging Requirement for Continuous Auditing of Responsible Machine Learning-based Applications" highlights the need for enhanced logging practices and tooling to support the development of auditable and transparent ML systems.
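To make the combination of logging-based auditing and human-in-the-loop review concrete, here is a minimal sketch of how a continuous fairness audit might work: predictions are logged with a group attribute, a simple fairness metric (demographic parity difference) is computed over the log, and results exceeding a threshold are flagged for human review. All names (`demographic_parity_difference`, `audit`, the threshold value) are illustrative assumptions, not APIs from FairLoop or the cited papers.

```python
from collections import defaultdict

def demographic_parity_difference(records):
    """Largest gap in positive-prediction rate between any two groups.

    records: iterable of (group, prediction) pairs, prediction in {0, 1}.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, pred in records:
        totals[group] += 1
        positives[group] += pred
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

def audit(records, threshold=0.1):
    """Hypothetical human-in-the-loop hook: return the fairness gap and
    whether it exceeds the threshold and needs human review."""
    gap = demographic_parity_difference(records)
    return gap, gap > threshold

# Example audit log: group "a" gets 3/4 positive predictions, group "b" 1/4.
log = [("a", 1), ("a", 1), ("a", 1), ("a", 0),
       ("b", 1), ("b", 0), ("b", 0), ("b", 0)]
gap, needs_review = audit(log)
# gap = 0.75 - 0.25 = 0.5, so this batch would be escalated to a human
```

In a production setting the log would be the kind of structured, continuously written record the logging-requirements paper calls for, and the review flag would route the batch to a human rather than just returning a boolean.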