Developments in Machine Unlearning and Membership Inference

The field of machine learning is placing greater emphasis on data privacy and security, with particular attention to machine unlearning and defenses against membership inference attacks. Recent research has highlighted the difficulty of building reliable membership inference tests, especially under relaxed definitions of membership. The design of efficient machine unlearning algorithms is likewise an active area, including unlearning guarantees that hold against system-aware attackers. Noteworthy papers in this area include:

  • A work demonstrating that membership inference tests are vulnerable to poisoning attacks, exposing a trade-off between their accuracy and robustness (a minimal sketch of the loss-threshold test underlying such attacks follows this list).
  • A proposal for system-aware unlearning algorithms that prioritize efficiency and security, with theoretical analysis of trade-offs between deletion capacity, accuracy, memory, and computation time.
  • A framework for measuring sample-level unlearning completeness, which quantifies the extent to which a model has unlearned specific data influences (an illustrative completeness score is sketched after this list).
  • A novel privacy attack that infers whether a data sample has been unlearned, following a strict threat model with limited access to the target model.
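
As background for the first and last items, a membership inference test in its simplest form thresholds a per-sample loss: training members tend to have lower loss than unseen samples. The sketch below is a minimal illustration using synthetic stand-in losses; none of the papers above reduces to this form (Apollo, for instance, operates label-only, without access to losses), and the fixed threshold stands in for the shadow-model calibration a real attack would use.

```python
import numpy as np

# Synthetic stand-ins for per-sample losses obtained by querying a target
# model; members typically sit at lower loss than non-members.
rng = np.random.default_rng(0)
member_losses = rng.gamma(shape=1.0, scale=0.3, size=1000)
nonmember_losses = rng.gamma(shape=2.0, scale=0.8, size=1000)

def infer_membership(losses: np.ndarray, threshold: float) -> np.ndarray:
    """Flag samples whose loss falls below the threshold as training members."""
    return losses < threshold

threshold = 0.5  # a real attack would calibrate this on shadow models
tpr = infer_membership(member_losses, threshold).mean()
fpr = infer_membership(nonmember_losses, threshold).mean()
print(f"TPR: {tpr:.2f}, FPR: {fpr:.2f}")
```

Running the same test on a model before and after a deletion request yields the basic signal an unlearning-inference attack exploits: a sample that flips from "member" to "non-member" was plausibly unlearned.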

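For the third item, one illustrative way to score sample-level unlearning completeness (a hypothetical interpolation-style score, not the paper's own metric) is to place the unlearned model's loss on a sample between two reference points: the original model's loss and the loss of a model retrained without that sample. A score of 0 means the unlearned model still behaves like the original; 1 means it matches the retrain baseline.

```python
import numpy as np

def completeness_score(loss_unlearned: float,
                       loss_original: float,
                       loss_retrained: float) -> float:
    """Hypothetical interpolation-style score in [0, 1]: how far the
    unlearned model's loss has moved from the original model's loss
    toward that of a model retrained without the sample."""
    denom = loss_retrained - loss_original
    if abs(denom) < 1e-12:
        return 1.0  # baselines agree; no residual influence to remove
    return float(np.clip((loss_unlearned - loss_original) / denom, 0.0, 1.0))

# Example: the sample's loss rose from 0.2 (original) to 0.9 (unlearned),
# against a retrain baseline of 1.0 -> influence is largely removed.
print(completeness_score(0.9, 0.2, 1.0))  # 0.875
```
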
Sources

What Really is a Member? Discrediting Membership Inference via Poisoning

System-Aware Unlearning Algorithms: Use Lesser, Forget Faster

Towards Lifecycle Unlearning Commitment Management: Measuring Sample-level Unlearning Completeness

Apollo: A Posteriori Label-Only Membership Inference Attack Towards Machine Unlearning
