Advances in AI Safety and Governance

The field of artificial intelligence (AI) is rapidly evolving, with a growing focus on safety and governance. Recent developments point toward more comprehensive and nuanced approaches to managing AI-related risks. Researchers are exploring mechanisms such as third-party compliance reviews and multi-stakeholder studies to ensure that AI systems are designed and deployed in ways that prioritize human well-being and safety. The concept of AI safety itself is being re-examined: some argue it should be understood as a branch of safety engineering, while others call for a more inclusive and flexible definition.

Noteworthy papers in this area include:

- A paper on third-party compliance reviews, which proposes a framework for assessing company adherence to frontier AI safety frameworks.
- A multi-stakeholder study on regulating algorithmic management, which highlights the challenges and opportunities in designing software to align algorithmic management practices with the law in workplace scheduling.
- A position paper on AI safety, which argues for a simpler and more inclusive definition of the field.
- A scoping review of privacy risks and preservation methods in explainable AI, which surveys the existing literature and proposes characteristics of privacy-preserving explanations.

Sources

Third-party compliance reviews for frontier AI safety frameworks

Regulating Algorithmic Management: A Multi-Stakeholder Study of Challenges in Aligning Software and the Law for Workplace Scheduling

What Is AI Safety? What Do We Want It to Be?

Privacy Risks and Preservation Methods in Explainable Artificial Intelligence: A Scoping Review

Scoring the European Citizen in the AI Era

The Precautionary Principle and the Innovation Principle: Incompatible Guides for AI Innovation Governance?

Position: The AI Conference Peer Review Crisis Demands Author Feedback and Reviewer Rewards

Overcoming the hurdle of legal expertise: A reusable model for smartwatch privacy policies
