Differential Privacy and Synthetic Data in AI

Research in artificial intelligence is placing growing emphasis on privacy and accountability, with the goal of building secure and responsible AI systems. Differential privacy remains a central line of work, with ongoing efforts to integrate it into machine learning models and to evaluate how well its guarantees hold in practice. Synthetic data is gaining importance as well, raising privacy and policy questions that have yet to be resolved. Membership inference attacks are increasingly used as tools for privacy assessment and auditing, and researchers are working to make them more reliable and effective (a minimal sketch of such an audit follows the list below). Noteworthy papers include:

  • Membership Inference Attacks as Privacy Tools: Reliability, Disparity and Ensemble, which proposes an ensemble framework to address disparities in membership inference attacks.
  • Frequency-Calibrated Membership Inference Attacks on Medical Image Diffusion Models, which introduces a frequency-calibrated reconstruction-error method for inferring whether a sample was part of a medical image diffusion model's training data.
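
To make the auditing idea above concrete, the following is a minimal sketch of a loss-threshold membership inference check on a toy scikit-learn classifier. It is not the method of either paper listed; the model, data, and scoring rule are illustrative assumptions, and practical audits calibrate such scores against reference models or domain-specific signals.

```python
# Illustrative sketch (not from the cited papers): a minimal loss-threshold
# membership inference audit on a toy classifier. All data, model choices,
# and thresholds here are assumptions for demonstration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Toy data: "members" are used for training, "non-members" are held out.
X = rng.normal(size=(2000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
X_mem, y_mem = X[:1000], y[:1000]        # training (member) records
X_non, y_non = X[1000:], y[1000:]        # held-out (non-member) records

model = LogisticRegression(max_iter=1000).fit(X_mem, y_mem)

def per_example_loss(model, X, y):
    """Negative log-likelihood of the true label for each record."""
    probs = model.predict_proba(X)
    return -np.log(probs[np.arange(len(y)), y] + 1e-12)

# Attack score: lower loss on a record suggests it was a training member.
scores = np.concatenate([
    -per_example_loss(model, X_mem, y_mem),
    -per_example_loss(model, X_non, y_non),
])
labels = np.concatenate([np.ones(len(X_mem)), np.zeros(len(X_non))])

# AUC near 0.5 suggests little membership leakage; values well above 0.5
# indicate the model memorizes its training data.
print(f"membership-inference AUC: {roc_auc_score(labels, scores):.3f}")
```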

Sources

Differential Privacy in Machine Learning: From Symbolic AI to LLMs

The Synthetic Mirror -- Synthetic Data at the Age of Agentic AI

Membership Inference Attacks as Privacy Tools: Reliability, Disparity and Ensemble

SoK: Privacy-Enhancing Technologies in Artificial Intelligence

Frequency-Calibrated Membership Inference Attacks on Medical Image Diffusion Models

Enhancing One-run Privacy Auditing with Quantile Regression-Based Membership Inference
