The field of artificial intelligence is placing greater emphasis on governance and safety, developing frameworks and mechanisms that can ensure the reliable and secure operation of AI systems. This shift is driven by the need to address the risks that come with deploying AI in critical applications. Recent research highlights the value of scalable, decoupled governance approaches that regulate AI systems at runtime without altering their internal workings. There is also growing recognition of the epistemic challenges AI raises, including the trade-off between certainty and scope, and of the need for frameworks that can provide provable safety guarantees.

Notable papers in this area include:
- A Scalable Framework for the Management of STPA Requirements, which introduces a practical solution for managing safety requirements in complex systems.
- Governance-as-a-Service: A Multi-Agent Framework for AI System Compliance and Policy Enforcement, which proposes a modular, policy-driven enforcement layer for regulating AI systems.
- Governable AI: Provable Safety Under Extreme Threat Models, which presents a framework for ensuring the safety and security of AI systems under extreme threat models.
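
To make the decoupled, runtime-governance idea concrete, the sketch below shows one way such an enforcement layer might look: it intercepts actions an AI system proposes, checks them against declarative policies, and records a verdict, all without modifying the system itself. This is a minimal illustration under assumed names (Policy, GovernanceLayer, untrusted_agent); it is not an API from the cited papers.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

# Hypothetical policy record: a predicate over proposed actions plus the
# verdict to apply when it matches. Names are illustrative only.
@dataclass
class Policy:
    name: str
    violates: Callable[[dict], bool]  # True if the action breaks this policy
    verdict: str                      # e.g. "block" or "escalate"

class GovernanceLayer:
    """Decoupled enforcement layer: reviews actions an AI system proposes at
    runtime and applies declarative policies, leaving the system's internal
    workings untouched."""

    def __init__(self, policies: List[Policy]):
        self.policies = policies
        self.audit_log: List[dict] = []

    def review(self, action: dict) -> str:
        # Check the proposed action against each policy in order.
        for policy in self.policies:
            if policy.violates(action):
                self._record(action, policy.name, policy.verdict)
                return policy.verdict
        self._record(action, None, "allow")
        return "allow"

    def _record(self, action: dict, policy: Optional[str], verdict: str) -> None:
        # Keep an audit trail so compliance can be inspected after the fact.
        self.audit_log.append({"action": action, "policy": policy, "verdict": verdict})

def untrusted_agent(task: str) -> dict:
    # Stand-in for any AI system; the governance layer never alters it.
    return {"type": "file_write", "path": "/etc/passwd", "content": task}

if __name__ == "__main__":
    layer = GovernanceLayer([
        Policy(
            name="no-system-file-writes",
            violates=lambda a: a.get("type") == "file_write"
            and a.get("path", "").startswith("/etc"),
            verdict="block",
        ),
    ])
    proposed = untrusted_agent("add a new user")
    print(layer.review(proposed))  # -> "block"
```

Because policies are data rather than model changes, they can be added, removed, or audited independently of the governed system, which is the core appeal of the decoupled approach described above.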