The field of AI research is moving toward a greater emphasis on governance, transparency, and accountability. Recent developments have highlighted the need for standardized frameworks and tools to support the assessment and regulation of AI systems, a trend driven by the increasing adoption of AI in high-stakes domains such as finance and healthcare, where a lack of transparency and accountability can have serious consequences. Researchers are addressing these challenges with solutions such as modular frameworks for AI assessment and regulatory-grade databases for incident reporting.

Notable papers in this area include:

- The Sandbox Configurator, which proposes a modular framework for supporting technical assessment in AI regulatory sandboxes.
- XR Blocks, which presents a cross-platform framework for accelerating human-centered AI and XR innovation.
- Bubble, Bubble, AI's Rumble, which proposes a global database for AI incident reporting in financial markets.
- Towards a Framework for Supporting the Ethical and Regulatory Certification of AI Systems, which outlines a comprehensive framework for integrating regulatory compliance, ethical standards, and transparency into AI systems.
- An Analysis of the New EU AI Act and A Proposed Standardization Framework for Machine Learning Fairness, which argues for a more tailored regulatory framework to strengthen the new EU AI regulation.
- TAIBOM: Bringing Trustworthiness to AI-Enabled Systems, which introduces a framework extending Software Bill of Materials (SBOM) principles to the AI domain.
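
To make the bill-of-materials idea concrete, the sketch below shows one hypothetical way such a manifest could be represented in code. The class and field names here are illustrative assumptions, not TAIBOM's actual schema; the point is only that datasets, model weights, and training code can be recorded as hashed, versioned components that an auditor can later verify.

```python
# Illustrative sketch only: a minimal, hypothetical AI bill-of-materials record
# in the spirit of extending SBOM principles to AI systems. Field names and
# structure are assumptions for illustration, not the schema proposed in TAIBOM.
from dataclasses import dataclass, field
from typing import List
import hashlib
import json


@dataclass
class ComponentRecord:
    """One traceable component of an AI system (dataset, model weights, code)."""
    name: str
    kind: str       # e.g. "dataset", "model", "training-code"
    version: str
    sha256: str     # content hash used for integrity checking


@dataclass
class AIBillOfMaterials:
    """A hypothetical manifest bundling components with provenance metadata."""
    system_name: str
    components: List[ComponentRecord] = field(default_factory=list)

    def digest(self) -> str:
        """Hash the serialized component list so auditors can verify the manifest."""
        payload = json.dumps(
            [vars(c) for c in self.components], sort_keys=True
        ).encode("utf-8")
        return hashlib.sha256(payload).hexdigest()


# Example usage: register a model and its training data, then emit an audit digest.
bom = AIBillOfMaterials("credit-scoring-v2")
bom.components.append(ComponentRecord("loan-history-2024", "dataset", "1.3", "ab12..."))
bom.components.append(ComponentRecord("scorer-weights", "model", "2.0", "cd34..."))
print(bom.digest())
```

A real regulatory-grade manifest would also carry provenance attestations and signatures, but even this minimal structure illustrates how SBOM-style traceability could support the certification and incident-reporting efforts described above.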