Advances in Secure AI Agent Execution and Governance

The field of AI agent execution and governance is evolving rapidly, with a focus on developing secure and scalable solutions for autonomous agents. Recent work highlights the need for new approaches to access control, policy enforcement, and identity management, and researchers are exploring frameworks and protocols for secure agent execution, including declarative policy mechanisms, hybrid inference protocols, and model-driven norm-enforcing tools. Noteworthy papers in this area include: AgentBound, which introduces a novel access control framework for MCP servers; Policy-Aware Generative AI, which presents a policy-aware controller for safe and auditable data access governance; SLIP-SEC, which formalizes secure protocols for model IP protection; Agentic Moderation, which leverages specialized agents to defend multimodal systems against jailbreak attacks; and AAGATE, which operationalizes the NIST AI Risk Management Framework for agentic AI governance. These advances have the potential to significantly improve the security and trustworthiness of AI systems.
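To make the idea of declarative, task-scoped access control concrete, the sketch below shows one minimal way an agent runtime might gate tool calls against a policy declared as data. All names (`Task`, `TOOL_POLICY`, `authorize`) are illustrative assumptions for this sketch and do not come from AgentBound or any other paper listed here.

```python
# Minimal sketch of declarative, task-scoped access control for agent tool
# calls. All identifiers are hypothetical, not from the cited papers.
from dataclasses import dataclass


@dataclass(frozen=True)
class Task:
    """A task grants the agent a fixed set of scopes for its duration."""
    description: str
    scopes: frozenset


# Policies are declared as data, not code: each tool maps to the scope it
# requires, so the mapping can be audited and updated without redeploying.
TOOL_POLICY = {
    "read_file": "fs:read",
    "write_file": "fs:write",
    "send_email": "net:email",
}


def authorize(task: Task, tool: str) -> bool:
    """Allow a tool call only if the current task grants the required scope."""
    required = TOOL_POLICY.get(tool)
    return required is not None and required in task.scopes


task = Task("summarize report.txt", frozenset({"fs:read"}))
print(authorize(task, "read_file"))   # True: scope granted by the task
print(authorize(task, "send_email"))  # False: an injected instruction is denied
```

Because authorization depends only on the task's declared scopes rather than on the model's instructions, a prompt-injected request for an out-of-scope tool is denied by construction, which is the intuition behind task-centric defenses against instruction injection.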

Sources

Securing AI Agent Execution

Policy-Aware Generative AI for Safe, Auditable Data Access Governance

Managing Administrative Law Cases using an Adaptable Model-driven Norm-enforcing Tool

SLIP-SEC: Formalizing Secure Protocols for Model IP Protection

Agentic Moderation: Multi-Agent Design for Safer Vision-Language Models

Identity Management for Agentic AI: The new frontier of authorization, authentication, and security for an AI agent world

AAGATE: A NIST AI RMF-Aligned Governance Platform for Agentic AI

Who Grants the Agent Power? Defending Against Instruction Injection via Task-Centric Access Control

GraphCompliance: Aligning Policy and Context Graphs for LLM-Based Regulatory Compliance

Delegated Authorization for Agents Constrained to Semantic Task-to-Scope Matching
