Advances in Privacy-Preserving Large Language Models

The field of Large Language Models (LLMs) is moving toward more privacy-preserving and secure models. Recent research highlights the need to balance task efficacy against privacy awareness and preservation in collaborative settings; a key challenge is reducing unnecessary exposure of sensitive data while maintaining task accuracy. To address this, researchers are exploring collaborative cloud-local frameworks, reinforcement learning, and multi-agent evaluation. These approaches aim to protect sensitive information, including personally identifiable information (PII), and to enable the safe reuse of clinical notes and other sensitive records.

Noteworthy papers in this area include MAGPIE, a benchmark for evaluating privacy understanding and preservation in multi-agent collaborative scenarios, and CORE, a collaborative framework that reduces UI exposure in mobile agents through cooperation between cloud and local LLMs. PrivacyPAD presents a reinforcement learning framework for dynamic privacy-aware delegation, TEAM-PHI introduces a multi-agent framework for evaluating and selecting PHI de-identification models, and LOGICAL describes a locally deployable PII removal system built on a fine-tuned GLiNER (Generalist and Lightweight Named Entity Recognition) model.
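To make the shared pattern concrete, below is a minimal sketch of local PII redaction before cloud delegation, using the open-source GLiNER library that LOGICAL builds on. The checkpoint name, label set, and the redact_for_cloud() helper are illustrative assumptions, not the pipeline from any of the papers above.

```python
# Minimal sketch: redact PII locally with GLiNER before sending text to a
# remote model. Assumes `pip install gliner`; the checkpoint, label set, and
# helper below are illustrative choices, not any paper's actual pipeline.
from gliner import GLiNER

# Load a pretrained GLiNER checkpoint; inference runs locally, so no raw
# text leaves the machine during redaction.
model = GLiNER.from_pretrained("urchade/gliner_multi-v2.1")

# GLiNER accepts arbitrary natural-language entity labels (hypothetical set).
PII_LABELS = ["person", "email", "phone number", "address", "date of birth"]

def redact_for_cloud(text: str, threshold: float = 0.5) -> str:
    """Replace detected PII spans with their label before the text is
    forwarded to a cloud LLM."""
    entities = model.predict_entities(text, PII_LABELS, threshold=threshold)
    # Replace spans right-to-left so earlier character offsets stay valid.
    for ent in sorted(entities, key=lambda e: e["start"], reverse=True):
        text = text[: ent["start"]] + f"[{ent['label'].upper()}]" + text[ent["end"] :]
    return text

if __name__ == "__main__":
    note = "Patient John Smith (DOB 1984-03-02) is reachable at john@example.com."
    print(redact_for_cloud(note))
    # e.g. "Patient [PERSON] (DOB [DATE OF BIRTH]) is reachable at [EMAIL]."
```

Redacting right-to-left is a standard trick when splicing spans by character offset: edits to later spans cannot invalidate the offsets of earlier ones.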

Sources

MAGPIE: A benchmark for Multi-AGent contextual PrIvacy Evaluation

CORE: Reducing UI Exposure in Mobile Agents via Collaboration Between Cloud and Local LLMs

PrivacyPAD: A Reinforcement Learning Framework for Dynamic Privacy-Aware Delegation

Towards Automatic Evaluation and Selection of PHI De-identification Models via Multi-Agent Collaboration

Local Obfuscation by GLINER for Impartial Context Aware Lineage: Development and evaluation of PII Removal system
