The field of text privacy and security is moving toward more robust and nuanced approaches to protecting sensitive information. Recent developments have focused on improving the evaluation of privacy protection in text, with an emphasis on reconciling different notions of privacy and developing more effective metrics. Additionally, there is growing interest in sensitivity-aware approaches to privacy protection, which allocate noise according to the sensitivity of individual pieces of personally identifiable information (PII), rather than applying a uniform noise level to all of them. Another line of research explores the use of artificial intelligence (AI) for document redaction, with a focus on balancing technological automation with human oversight.
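The core idea behind sensitivity-aware noise allocation can be sketched as splitting a global differential-privacy budget unevenly across PII categories, so that more sensitive categories receive a smaller epsilon and therefore more Laplace noise. The sketch below is illustrative only: the category names, weights, and budget-splitting rule are assumptions for this example, not the actual SA-ADP mechanism.

```python
import math
import random

# Hypothetical per-category sensitivity weights (an assumption for this
# sketch, not SA-ADP's real weighting): higher weight = more sensitive.
SENSITIVITY_WEIGHTS = {"ssn": 1.0, "email": 0.5, "zip_code": 0.2}

def laplace_sample(scale: float) -> float:
    """Draw one sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def allocate_budgets(total_epsilon: float, weights: dict) -> dict:
    """Split a global privacy budget so that more sensitive categories
    receive a smaller epsilon (and hence more noise on release)."""
    inverse = {cat: 1.0 / w for cat, w in weights.items()}
    z = sum(inverse.values())
    return {cat: total_epsilon * inv / z for cat, inv in inverse.items()}

def noisy_count(true_count: int, category: str, total_epsilon: float = 1.0) -> float:
    """Release a count with Laplace noise scaled to the category's budget.

    A counting query has sensitivity 1, so the Laplace scale is 1 / epsilon.
    """
    eps = allocate_budgets(total_epsilon, SENSITIVITY_WEIGHTS)[category]
    return true_count + laplace_sample(1.0 / eps)
```

Under this split, a count over SSNs is released with a much smaller epsilon (more noise) than a count over zip codes, while the per-category epsilons still sum to the global budget.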
Noteworthy papers include: SA-ADP, which proposes a sensitivity-aware approach to differential privacy for large language models, achieving strong privacy protection without degrading model utility; Randomized Masked Finetuning, which introduces a privacy-preserving fine-tuning technique that reduces PII memorization in large language models while minimizing performance impact; Towards Contextual Sensitive Data Detection, which refines and broadens the definition of sensitive data and introduces mechanisms for contextual sensitive data detection that consider the broader context of a dataset; and ConsentDiff at Scale, which provides a longitudinal view of web privacy policy changes and UI frictions, enabling comparisons over time, regions, and verticals.
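The masking idea behind privacy-preserving fine-tuning can be sketched as randomly replacing PII-flagged tokens with a mask token before the text reaches the model, so raw identifiers are rarely seen and thus rarely memorized. This is an illustrative sketch of the general technique, not the actual Randomized Masked Finetuning procedure; the `MASK_TOKEN` string, `mask_prob` parameter, and upstream PII flags are assumptions for this example.

```python
import random

MASK_TOKEN = "[MASK]"

def randomized_pii_mask(tokens, pii_flags, mask_prob=0.8, seed=None):
    """Randomly replace tokens flagged as PII with a mask token.

    `pii_flags[i]` is True when token i was tagged as PII by an
    upstream detector (assumed to exist). Non-PII tokens pass through
    unchanged; flagged tokens are masked with probability `mask_prob`.
    """
    rng = random.Random(seed)
    return [
        MASK_TOKEN if flag and rng.random() < mask_prob else tok
        for tok, flag in zip(tokens, pii_flags)
    ]

# With mask_prob=1.0 every flagged token is masked:
# randomized_pii_mask(["Call", "Alice", "at", "555-0199"],
#                     [False, True, False, True], mask_prob=1.0)
# -> ["Call", "[MASK]", "at", "[MASK]"]
```

Because masking is randomized per example, repeated fine-tuning epochs expose the model to different masked variants of the same record, which limits verbatim memorization while keeping most of the surrounding context intact.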