Insider Threat Detection and Privacy Protection in AI Systems

The field of insider threat detection is moving toward approaches that leverage multivariate behavioral signal decomposition, cross-modal fusion, and large language models (LLMs) to improve detection accuracy and efficiency. These methods aim to capture the complex and dynamic nature of insider threats, which often involve subtle, context-dependent behaviors. In parallel, there is a growing focus on protecting privacy in AI systems, particularly around LLMs, where multi-agent frameworks and simulation-based approaches are being explored to enhance contextual privacy and detect disinformation.

Notable papers in this area include Log2Sig, which proposes a frequency-aware insider threat detection framework built on multivariate behavioral signal decomposition, and DMFI, which integrates semantic inference with behavior-aware fine-tuning for LLM-based insider threat detection. ScamAgents and Chimera highlight both the risks and the benefits of using LLM agents in security applications, while 1-2-3 Check and the MCP-Orchestrated Multi-Agent System demonstrate the effectiveness of multi-agent systems for enhancing contextual privacy and detecting disinformation. A concrete, simplified sketch of the frequency-aware decomposition idea follows below.
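To make the idea of frequency-aware decomposition of behavioral signals more concrete, the sketch below builds a multivariate daily activity signal for a single user (e.g., logon, file-access, and email event counts), extracts dominant frequency magnitudes with an FFT, and scores deviation from a population baseline. This is a minimal illustration under assumed inputs; the function names, feature choices, and scoring are illustrative and do not reproduce the actual Log2Sig pipeline.

```python
# Minimal sketch: frequency-aware decomposition of multivariate behavioral signals.
# Assumes per-user daily event counts; feature choices are illustrative, not Log2Sig's.
import numpy as np

def frequency_features(signal: np.ndarray, top_k: int = 3) -> np.ndarray:
    """Return the top-k dominant frequency magnitudes per behavioral channel.

    signal: array of shape (num_days, num_channels), e.g. daily counts of
            logon, file-access, and email events for one user.
    """
    # Remove the per-channel mean so the DC component does not dominate.
    centered = signal - signal.mean(axis=0, keepdims=True)
    # Real FFT along the time axis; magnitudes capture periodic structure
    # such as weekly rhythms in normal work behavior.
    spectrum = np.abs(np.fft.rfft(centered, axis=0))
    # Keep the strongest k frequency magnitudes per channel as features.
    top = np.sort(spectrum, axis=0)[-top_k:, :]
    return top.flatten()

def anomaly_score(user_features: np.ndarray, mean: np.ndarray, std: np.ndarray) -> float:
    """Mean absolute z-score against a baseline computed over the user population."""
    return float(np.abs((user_features - mean) / (std + 1e-8)).mean())

# Example: 60 days of synthetic activity across 3 channels for one user.
rng = np.random.default_rng(0)
activity = rng.poisson(lam=[20, 5, 10], size=(60, 3)).astype(float)
feats = frequency_features(activity)
print(feats.shape)  # (top_k * num_channels,) = (9,)
```

In practice, users whose spectral features drift sharply from the population baseline (for example, a sudden burst of off-cycle file access) would be flagged for review; the decomposition itself is only one stage of the detection frameworks summarized above.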

Sources

Log2Sig: Frequency-Aware Insider Threat Detection via Multivariate Behavioral Signal Decomposition

MambaITD: An Efficient Cross-Modal Mamba Network for Insider Threat Detection

DMFI: Dual-Modality Fine-Tuning and Inference Framework for LLM-Based Insider Threat Detection

ScamAgents: How AI Agents Can Simulate Human-Level Scam Calls

1-2-3 Check: Enhancing Contextual Privacy in LLM via Multi-Agent Reasoning

Chimera: Harnessing Multi-Agent LLMs for Automatic Insider Threat Simulation

MCP-Orchestrated Multi-Agent System for Automated Disinformation Detection

Searching for Privacy Risks in LLM Agents via Simulation
