Research on Large Language Models (LLMs) is increasingly focused on making models privacy-preserving and secure. Recent work highlights the need to balance task efficacy with privacy understanding and preservation in collaborative settings, where a key challenge is reducing unnecessary exposure of sensitive information without sacrificing task accuracy. To address this, researchers are exploring collaborative frameworks, reinforcement learning, and multi-agent evaluation, with the shared goals of protecting personally identifiable information (PII) and enabling the safe reuse of clinical notes and other sensitive data.

Noteworthy papers in this area include MAGPIE, which introduces a benchmark for evaluating privacy understanding and preservation in multi-agent collaborative scenarios, and CORE, which proposes a collaborative framework to reduce unnecessary UI exposure in mobile agents. PrivacyPAD presents a reinforcement learning framework for dynamic privacy-aware delegation, TEAM-PHI introduces a multi-agent evaluation and selection framework for protected health information (PHI) de-identification models, and LOGICAL presents a locally deployable PII removal system built on a fine-tuned Generalist and Lightweight Named Entity Recognition model.
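Several of these techniques, NER-based PII removal in particular, reduce to a common pattern: detect sensitive spans in text, then redact them or route the text accordingly. Below is a minimal, self-contained sketch of that pattern. The regex-based `detect_pii` is a hypothetical stand-in for a fine-tuned named-entity-recognition model, and the `route` policy only loosely echoes the idea of privacy-aware delegation (text containing PII stays with a local model, while clean text may be sent to a more capable remote one); nothing here reproduces the cited papers' actual implementations.

```python
# Sketch of detect-then-redact/route PII handling. detect_pii() is a
# hypothetical regex stand-in for a fine-tuned NER model; a real system
# would substitute model predictions for these patterns.
import re
from dataclasses import dataclass


@dataclass
class PIISpan:
    start: int
    end: int
    label: str


_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}


def detect_pii(text: str) -> list[PIISpan]:
    """Return detected PII spans, sorted by start offset."""
    spans = [
        PIISpan(m.start(), m.end(), label)
        for label, pattern in _PATTERNS.items()
        for m in pattern.finditer(text)
    ]
    return sorted(spans, key=lambda s: s.start)


def redact(text: str, spans: list[PIISpan]) -> str:
    """Mask each span with its label, replacing right-to-left so that
    earlier offsets remain valid as the string shrinks/grows."""
    for span in reversed(spans):
        text = text[: span.start] + f"[{span.label}]" + text[span.end :]
    return text


def route(text: str) -> tuple[str, str]:
    """Privacy-aware routing policy: PII-bearing text is redacted and
    kept for a local model; clean text may go to a remote model."""
    spans = detect_pii(text)
    if spans:
        return "local", redact(text, spans)
    return "remote", text


if __name__ == "__main__":
    note = "Follow up with jane.doe@example.com or call 555-123-4567."
    print(route(note))
    # -> ('local', 'Follow up with [EMAIL] or call [PHONE].')
```

Running the example routes the note to the local destination with both spans masked; the right-to-left replacement in `redact` is what keeps earlier span offsets valid as text is rewritten.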