Advancements in AI Governance and Cloud Security

The field of AI governance and cloud security is evolving rapidly, driven by efforts to ensure the safe and trustworthy development of AI systems. Researchers are exploring new approaches to automate risk management frameworks, improve cloud security modeling, and strengthen explainability in legal AI systems. In cloud security, semantic models, machine-readable evidence, and continuous reporting are becoming increasingly important. In parallel, governance frameworks for AI systems in the legal sector are gaining traction, with an emphasis on verifiable compliance with regulations such as the EU AI Act.

Noteworthy papers in this area include:

- Automating the RMF: Lessons from the FedRAMP 20x Pilot, a case study on using Key Security Indicators (KSIs) and automated evidence pipelines to streamline authorization and improve cyber risk management (a minimal evidence-check sketch follows this list).
- Argumentation-Based Explainability for Legal AI: Comparative and Regulatory Perspectives, which promotes computational models of argument as a way to produce legally relevant explanations and make AI decision-making more transparent (see the argumentation sketch below).
- Reproducibility: The New Frontier in AI Governance, which argues that stricter reproducibility guidelines in AI research would build consensus on the AI risk landscape and enable effective AI governance.
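To make the idea of machine-readable evidence and continuous reporting concrete, here is a minimal sketch of an automated evidence check in Python. The KSI identifier, evidence schema, and MFA rule are hypothetical illustrations, not the actual FedRAMP 20x format or the specific pipeline described in the paper.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class KsiResult:
    """One machine-readable evidence record for a Key Security Indicator."""
    ksi_id: str       # illustrative identifier, not a real FedRAMP 20x KSI
    passed: bool
    evidence: dict    # data backing the pass/fail decision
    collected_at: str

def check_mfa_enforced(iam_report: dict) -> KsiResult:
    """Sample check: every privileged account has MFA enabled."""
    privileged = [u for u in iam_report["users"] if u["privileged"]]
    missing = [u["name"] for u in privileged if not u["mfa_enabled"]]
    return KsiResult(
        ksi_id="KSI-IAM-01",
        passed=not missing,
        evidence={"privileged_count": len(privileged), "missing_mfa": missing},
        collected_at=datetime.now(timezone.utc).isoformat(),
    )

if __name__ == "__main__":
    # Stand-in for account data pulled from a cloud provider's IAM API.
    report = {"users": [
        {"name": "alice", "privileged": True, "mfa_enabled": True},
        {"name": "bob", "privileged": True, "mfa_enabled": False},
    ]}
    # Emit the result as JSON so a continuous-reporting service can ingest it.
    print(json.dumps(asdict(check_mfa_enforced(report)), indent=2))
```

Run on a schedule, checks like this replace point-in-time audit documents with a stream of timestamped, machine-readable evidence records.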
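The computational models of argument promoted by the explainability paper can be illustrated with a Dung-style abstract argumentation framework, where an explanation is read off from which arguments survive attack. The sketch below computes the grounded extension (the most skeptical set of collectively acceptable arguments); the legal arguments and attack relation are invented for illustration, not taken from the paper.

```python
def grounded_extension(arguments: set[str],
                       attacks: set[tuple[str, str]]) -> set[str]:
    """Iterate the characteristic function F(S) = {a | every attacker
    of a is attacked by some member of S} until a fixed point."""
    def defended(candidate: str, s: set[str]) -> bool:
        attackers = {x for (x, y) in attacks if y == candidate}
        return all(any((z, x) in attacks for z in s) for x in attackers)

    extension: set[str] = set()
    while True:
        nxt = {a for a in arguments if defended(a, extension)}
        if nxt == extension:
            return extension
        extension = nxt

if __name__ == "__main__":
    # Invented legal example: a force majeure defence attacks the liability
    # claim, but a missed-notice argument defeats the defence.
    args = {"liable", "force_majeure", "notice_missed"}
    atts = {("force_majeure", "liable"), ("notice_missed", "force_majeure")}
    print(sorted(grounded_extension(args, atts)))
    # -> ['liable', 'notice_missed']: the defence is defeated, which
    #    reinstates the liability argument.
```

A readout of which arguments were defeated, and by what, is the kind of argument-level explanation such models aim to provide.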