The field of artificial intelligence is moving towards increased transparency and accountability, with a focus on developing frameworks and systems that can provide verifiable evidence of AI decision-making processes. This is driven by the need to address the risks associated with AI systems, including bias, security vulnerabilities, and lack of explainability. Researchers are developing novel approaches to AI governance, including multi-stakeholder frameworks, risk assessment and management methodologies, and artifact-centric AI agent paradigms. These approaches aim to provide a principled foundation for standardized, trustworthy, and machine-verifiable AI risk documentation. Notable papers in this area include:

- A Workflow for Full Traceability of AI Decisions, which presents a running workflow supporting the generation of tamper-proof, verifiable, and exhaustive traces of AI decisions.
- AI Bill of Materials and Beyond: Systematizing Security Assurance through the AI Risk Scanning (AIRS) Framework, which introduces a threat-model-based framework designed to operationalize AI assurance.
- The Last Vote: A Multi-Stakeholder Framework for Language Model Governance, which presents a comprehensive framework addressing the full spectrum of risks that AI poses to democratic societies.
- BIOMERO 2.0: end-to-end FAIR infrastructure for bioimaging data import, analysis, and provenance, which integrates data import, preprocessing, analysis, and workflow monitoring through an OMERO.web plugin and containerized components.
- It's a Feature, Not a Bug: Secure and Auditable State Rollback for Confidential Cloud Applications, which presents a general-purpose security framework that preserves rollback protection while enabling policy-authorized legitimate rollbacks of application binaries, configuration, and data.
- MAIF: Enforcing AI Trust and Provenance with an Artifact-Centric Agentic Paradigm, which proposes an artifact-centric AI agent paradigm where behavior is driven by persistent, verifiable data artifacts rather than ephemeral tasks.
- Identifying the Supply Chain of AI for Trustworthiness and Risk Management in Critical Applications, which surveys the current state of AI risk assessment and management, with a focus on the supply chain of AI and risks relating to the behavior and outputs of the AI system.
- The Loss of Control Playbook: Degrees, Dynamics, and Preparedness, which addresses the absence of an actionable definition for Loss of Control (LoC) in AI systems by developing a novel taxonomy and preparedness framework.
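A recurring idea across these papers is producing tamper-evident, machine-verifiable records of AI decisions. A common building block for such records is a hash chain, where each log entry commits to its predecessor so that any later modification is detectable. The sketch below illustrates that general technique only; the class name, entry fields, and use of SHA-256 over canonical JSON are illustrative assumptions, not the actual mechanism of any of the cited works.

```python
import hashlib
import json
from datetime import datetime, timezone

class DecisionTrace:
    """Append-only, hash-chained log of AI decisions.

    Each entry stores the hash of the previous entry, so tampering
    with any recorded decision breaks the chain on verification.
    (Illustrative sketch, not the workflow from the cited paper.)
    """

    GENESIS = "0" * 64  # sentinel "previous hash" for the first entry

    def __init__(self):
        self.entries = []

    @staticmethod
    def _digest(body):
        # Canonical JSON (sorted keys) so the hash is deterministic.
        return hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()

    def record(self, inputs, model_id, decision):
        prev_hash = self.entries[-1]["hash"] if self.entries else self.GENESIS
        body = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_id": model_id,
            "inputs": inputs,
            "decision": decision,
            "prev_hash": prev_hash,
        }
        entry = {**body, "hash": self._digest(body)}
        self.entries.append(entry)
        return entry["hash"]

    def verify(self):
        # Recompute every hash and check the chain links.
        prev = self.GENESIS
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["prev_hash"] != prev:
                return False
            if self._digest(body) != entry["hash"]:
                return False
            prev = entry["hash"]
        return True
```

In use, an auditor re-runs `verify()` over the exported trace: editing any recorded decision after the fact changes its recomputed hash, invalidating that entry and every entry chained after it. Production systems would additionally need signatures and external anchoring to prevent wholesale regeneration of the chain.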