Federated Unlearning and Machine Unlearning

The field of federated learning and machine unlearning is advancing rapidly, with a focus on methods that remove the influence of forgotten data from trained models while preserving model performance and privacy. Researchers are exploring a range of approaches, including gradient-based methods, functional encryption, and dedicated federated unlearning frameworks, to address unlearning in settings such as large language models and federated learning. Notable papers in this area include EFU, which proposes a cryptographically enforced federated unlearning framework that lets clients initiate unlearning while concealing its occurrence from the server, and Oblivionis, which introduces a lightweight learning and unlearning framework for federated large language models that enables clients to selectively remove specific private data during training.
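To make the gradient-based family of approaches concrete, the following is a minimal sketch (not taken from any of the cited papers) of unlearning by gradient ascent: after training a toy logistic-regression model, the influence of a designated forget set is reduced by ascending its loss gradient, while a descent "repair" step on the retained data preserves utility. All data, step sizes, and iteration counts here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: logistic regression on two Gaussian blobs (hypothetical setup).
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

def grad(w, X, y):
    """Gradient of the mean logistic loss with respect to the weights w."""
    p = 1 / (1 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y)

# Standard training: gradient descent on the full dataset.
w = np.zeros(2)
for _ in range(200):
    w -= 0.5 * grad(w, X, y)

# "Forget" the first 10 samples by ascending their loss gradient;
# a paired descent step on the retained data limits utility loss.
forget, retain = slice(0, 10), slice(10, None)
for _ in range(20):
    w += 0.1 * grad(w, X[forget], y[forget])   # unlearn step (ascent)
    w -= 0.1 * grad(w, X[retain], y[retain])   # repair step (descent)

acc = np.mean((1 / (1 + np.exp(-X[retain] @ w)) > 0.5) == y[retain])
print(f"retain-set accuracy after unlearning: {acc:.2f}")
```

Real methods add safeguards this sketch omits, such as bounding the ascent steps or estimating per-sample influence, and the federated variants cited above coordinate these updates across clients rather than on a single dataset.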

Sources

LLM Unlearning using Gradient Ratio-Based Influence Estimation and Noise Injection

Label Inference Attacks against Federated Unlearning

EFU: Enforcing Federated Unlearning via Functional Encryption

Oblivionis: A Lightweight Learning and Unlearning Framework for Federated Large Language Models

FedShard: Federated Unlearning with Efficiency Fairness and Performance Fairness
