The field of AI security and privacy is rapidly evolving, with a growing focus on protecting against threats such as model extraction attacks, indirect prompt injection, and tool poisoning. Recent research has highlighted the importance of developing robust defense strategies to ensure the security and integrity of AI systems. Notably, the development of novel attack surfaces, such as side-channel attacks on Mixture-of-Experts architectures, has emphasized the need for ongoing innovation in AI security. Furthermore, the creation of comprehensive benchmarks, such as MCPSecBench and MCPTox, has facilitated the systematic evaluation of AI systems' security and robustness. Noteworthy papers include MCP-Guard, which proposes a robust defense framework for Model Context Protocol integrity, and MoEcho, which introduces a side-channel analysis-based attack surface that compromises user privacy in Mixture-of-Experts-based systems.
Advancements in AI Security and Privacy
Sources
MCP-Guard: A Defense Framework for Model Context Protocol Integrity in Large Language Model Applications
CrossTrace: Efficient Cross-Thread and Cross-Service Span Correlation in Distributed Tracing for Microservices
WebGeoInfer: A Structure-Free and Multi-Stage Framework for Geolocation Inference of Devices Exposing Information
Invitation Is All You Need! Promptware Attacks Against LLM-Powered Assistants in Production Are Practical and Dangerous