Research in artificial intelligence is placing growing emphasis on privacy and accountability, with the goal of building secure and responsible systems. Differential privacy remains a key area, with ongoing work to integrate it into machine learning training pipelines and to evaluate the privacy-utility trade-offs it incurs in practice. Synthetic data is also gaining importance as a privacy-preserving substitute for real records, though its actual privacy guarantees and policy implications remain open questions. Membership inference attacks, which test whether a specific example was part of a model's training set, are increasingly used as privacy assessment and auditing tools, and researchers are working to make them more reliable. Noteworthy papers include:
- Membership Inference Attacks as Privacy Tools: Reliability, Disparity and Ensemble, which examines the reliability of membership inference attacks as auditing tools and proposes an ensemble framework to address disparities across individual attacks.
- Frequency-Calibrated Membership Inference Attacks on Medical Image Diffusion Models, which introduces a frequency-calibrated reconstruction-error method for detecting training-set membership in medical image diffusion models.
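The ensemble idea behind the first paper can be illustrated with a minimal sketch. This is not the paper's actual framework: the per-example signals (loss, confidence, entropy), their toy distributions, and the majority-vote rule are all illustrative assumptions. The point is simply that individual threshold attacks can disagree on which examples they flag, and combining their votes smooths out those disparities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-example signals from a target model: loss, top-class
# confidence, and prediction entropy. Members (first n rows) tend to have
# lower loss, higher confidence, and lower entropy than non-members.
n = 200
losses = np.concatenate([rng.gamma(1.0, 0.3, n), rng.gamma(2.0, 0.6, n)])
conf = np.concatenate([rng.beta(8, 2, n), rng.beta(4, 4, n)])
entropy = np.concatenate([rng.gamma(1.0, 0.2, n), rng.gamma(2.0, 0.4, n)])
is_member = np.concatenate([np.ones(n, bool), np.zeros(n, bool)])

# Three simple threshold attacks, each voting "member" (True) or not.
# Each attack flags a different (overlapping) subset of examples.
votes = np.stack([
    losses < np.median(losses),    # low-loss attack
    conf > np.median(conf),        # high-confidence attack
    entropy < np.median(entropy),  # low-entropy attack
])

# Ensemble by majority vote: an example is flagged as a member when at
# least two of the three attacks agree.
ensemble_pred = votes.sum(axis=0) >= 2

accuracy = (ensemble_pred == is_member).mean()
print(f"ensemble attack accuracy: {accuracy:.2f}")
```

Majority voting is the simplest possible combiner; a real auditing framework would also calibrate each attack's threshold on a held-out population rather than using the pooled median as done here.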
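The second paper's general idea, scoring reconstruction error in the frequency domain rather than pixel space, can also be sketched. Everything below is a stand-in: the low-pass cutoff, the toy "reconstructions" (the original image plus noise), and the hypothesis that low-frequency error carries the cleaner membership signal are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)

def frequency_weighted_error(image, reconstruction, cutoff=0.25):
    """Reconstruction error restricted to low spatial frequencies.

    Illustrative hypothesis: a diffusion model reconstructs the
    low-frequency content of training members more faithfully, while
    high-frequency error is dominated by noise, so scoring only low
    frequencies gives a cleaner membership signal. The cutoff is an
    arbitrary illustrative choice.
    """
    err = np.fft.fftshift(np.fft.fft2(image - reconstruction))
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    low_pass = radius <= cutoff * min(h, w) / 2
    return float(np.mean(np.abs(err[low_pass]) ** 2))

# Toy data: a "member" whose reconstruction is close to the original,
# vs a "non-member" reconstructed with a much larger error.
image = rng.normal(size=(32, 32))
member_recon = image + 0.05 * rng.normal(size=(32, 32))
nonmember_recon = image + 0.5 * rng.normal(size=(32, 32))

member_score = frequency_weighted_error(image, member_recon)
nonmember_score = frequency_weighted_error(image, nonmember_recon)

# Lower frequency-calibrated error suggests a training member.
print(member_score, nonmember_score)
```

Thresholding this score across a candidate set would then yield membership predictions, with the threshold calibrated on data known to be outside the training set.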