Advances in Privacy-Preserving Machine Learning

Machine learning research is placing growing emphasis on privacy preservation: developing techniques that protect sensitive information while still permitting effective model training and deployment. The shift is driven by increasingly stringent regulation and by the need to guard against data breaches and other security threats. Researchers are pursuing several complementary approaches, including differential privacy, federated learning, and secure multi-party computation. Notable papers in this area include DELTA, which proposes a variational disentangled generative learning framework for privacy-preserving data reprogramming; PREE, which introduces a prefix-enhanced fingerprint editing framework for large language models; EverTracer, which presents a gray-box fingerprinting framework for large language models; and DCMI, which proposes a differential calibration membership inference attack against retrieval-augmented generation.
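The approaches named above are broad families rather than single algorithms. As one concrete, self-contained illustration, not drawn from any of the listed papers, here is a minimal sketch of the Laplace mechanism from differential privacy, which answers a counting query with noise calibrated to the query's sensitivity (the function names `laplace_noise` and `private_count` are illustrative, not from any library):

```python
import random


def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise.

    The difference of two i.i.d. exponential draws with mean `scale`
    is Laplace-distributed with that scale.
    """
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)


def private_count(records, predicate, epsilon: float) -> float:
    """Answer a counting query with epsilon-differential privacy.

    A count has sensitivity 1 (adding or removing one record changes
    it by at most 1), so Laplace noise of scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)


# Example: roughly how many of 100 synthetic records fall below 50?
data = list(range(100))
noisy_answer = private_count(data, lambda r: r < 50, epsilon=1.0)
```

Smaller epsilon means stronger privacy but noisier answers, and under basic composition repeated queries consume privacy budget additively, which is the trade-off frameworks like those surveyed here must manage.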
Sources
PREE: Towards Harmless and Adaptive Fingerprint Editing in Large Language Models via Knowledge Prefix Enhancement
Unlocking the Effectiveness of LoRA-FP for Seamless Transfer Implantation of Fingerprints in Downstream Models
Privacy-Utility Trade-off in Data Publication: A Bilevel Optimization Framework with Curvature-Guided Perturbation
From Evaluation to Defense: Constructing Persistent Edit-Based Fingerprints for Large Language Models