The field of Large Language Model (LLM)-driven agentic AI systems is moving towards enhancing the reliability and scalability of communication, with a particular focus on security and robustness against various types of attacks. Recent developments highlight the importance of designing principled patterns for building AI agents with provable resistance to prompt injection attacks. The use of software design patterns, such as Mediator, Observer, Publish-Subscribe, and Broker, is being revisited to structure agent interactions and optimize data flow. Noteworthy papers include:
- Sentinel, a novel detection model that achieves state-of-the-art performance in detecting prompt injection attacks,
- Polymorphic Prompt Assembling (PPA), a lightweight defense mechanism that protects against prompt injection with near-zero overhead,
- ReAgent, a novel defense against backdoor attacks on LLM-based agents that employs a two-level approach to detect potential backdoors.