Advancements in Secure LLM Agent Communication

The field of Large Language Model (LLM)-driven agentic AI systems is moving towards enhancing the reliability and scalability of communication, with a particular focus on security and robustness against various types of attacks. Recent developments highlight the importance of designing principled patterns for building AI agents with provable resistance to prompt injection attacks. The use of software design patterns, such as Mediator, Observer, Publish-Subscribe, and Broker, is being revisited to structure agent interactions and optimize data flow. Noteworthy papers include:

  • Sentinel, a novel detection model that achieves state-of-the-art performance in detecting prompt injection attacks,
  • Polymorphic Prompt Assembling (PPA), a lightweight defense mechanism that protects against prompt injection with near-zero overhead,
  • ReAgent, a novel defense against backdoor attacks on LLM-based agents that employs a two-level approach to detect potential backdoors.

Sources

Survey of LLM Agent Communication with MCP: A Software Design Pattern Centric Review

Sentinel: SOTA model to protect against prompt injections

To Protect the LLM Agent Against the Prompt Injection Attack with Polymorphic Prompt

Your Agent Can Defend Itself against Backdoor Attacks

Design Patterns for Securing LLM Agents against Prompt Injections

Effective Red-Teaming of Policy-Adherent Agents

LLMail-Inject: A Dataset from a Realistic Adaptive Prompt Injection Challenge

Disclosure Audits for LLM Agents

Agentic Semantic Control for Autonomous Wireless Space Networks: Extending Space-O-RAN with MCP-Driven Distributed Intelligence

Built with on top of