The field of artificial intelligence (AI) is rapidly evolving, with a growing focus on safety and governance. Recent developments suggest a shift toward more comprehensive and nuanced approaches to managing AI-related risks. Researchers are exploring solutions such as third-party compliance reviews and multi-stakeholder studies to ensure that AI systems are designed and deployed in ways that prioritize human well-being and safety. The concept of AI safety itself is being re-examined: some argue it should be understood as a branch of safety engineering, while others emphasize the need for a more inclusive and flexible definition.

Noteworthy papers in this area include:

- A paper on third-party compliance reviews, which proposes a framework for assessing company adherence to safety frameworks.
- A paper on regulating algorithmic management, which highlights the challenges and opportunities in designing software to regulate algorithmic management practices.
- A paper on AI safety, which argues for a simpler and more inclusive definition of the term.
- A comprehensive review of privacy risks and preservation methods in explainable AI, which surveys the existing literature and proposes characteristics of privacy-preserving explanations.