Advances in Machine Unlearning and Continual Learning

The field of machine learning is moving toward more privacy-preserving and adaptable models. Recent research has focused on machine unlearning, which enables a model to efficiently remove the influence of deleted data, and continual learning, which aims to mitigate catastrophic forgetting when models are trained sequentially on multiple tasks. Work in both areas is converging on more effective methods, including approximate unlearning algorithms, analyses of feature learning, and synthetic boundary experience blending. Noteworthy papers include 'The Right to be Forgotten in Pruning: Unveil Machine Unlearning on Sparse Models', which proposes an un-pruning algorithm to eliminate the impact of deleted data on model pruning, and 'Continual Learning with Synthetic Boundary Experience Blending', which introduces a training framework that integrates knowledge from both stored key samples and synthetic, boundary-adjacent data.
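To make the notion of approximate unlearning concrete, here is a minimal, hypothetical sketch (not the method of any paper listed below): a toy logistic-regression model first takes gradient-ascent steps on the loss of a "forget" set to erase its influence, then is briefly fine-tuned on the retained data to repair accuracy. All data, step counts, and learning rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian blobs (class 0 around -2, class 1 around +2).
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

def sigmoid(z):
    # Clip to avoid overflow when weights grow large during ascent.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def grad(w, X, y):
    # Gradient of the mean logistic loss.
    return X.T @ (sigmoid(X @ w) - y) / len(y)

def train(w, X, y, lr=0.5, steps=200):
    for _ in range(steps):
        w = w - lr * grad(w, X, y)
    return w

w = train(np.zeros(2), X, y)  # original model, trained on all data

# Approximate unlearning: ascend the loss on the forget set,
# then fine-tune on the retained data to restore utility.
forget, retain = slice(0, 50), slice(50, None)
w_unlearned = w.copy()
for _ in range(50):
    w_unlearned = w_unlearned + 0.5 * grad(w_unlearned, X[forget], y[forget])
w_unlearned = train(w_unlearned, X[retain], y[retain], steps=100)
```

This ascend-then-repair pattern is a common baseline in the approximate-unlearning literature; the surveyed papers improve on it with influence approximation, preference optimization, and pruning-aware variants.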

Sources

The Right to be Forgotten in Pruning: Unveil Machine Unlearning on Sparse Models

Feature learning is decoupled from generalization in high capacity neural networks

A Survey on Generative Model Unlearning: Fundamentals, Taxonomy, Evaluation, and Future Direction

What Can Grokking Teach Us About Learning Under Nonstationarity?

Do Not Mimic My Voice: Speaker Identity Unlearning for Zero-Shot Text-to-Speech

Unlearning of Knowledge Graph Embedding via Preference Optimization

Reminiscence Attack on Residuals: Exploiting Approximate Machine Unlearning for Privacy

Task-Focused Consolidation with Spaced Recall: Making Neural Networks learn like college students

Zero-Shot Machine Unlearning with Proxy Adversarial Data Generation

ZIUM: Zero-Shot Intent-Aware Adversarial Attack on Unlearned Models

Quantum-Inspired Audio Unlearning: Towards Privacy-Preserving Voice Biometrics

LoReUn: Data Itself Implicitly Provides Cues to Improve Machine Unlearning

Efficient Machine Unlearning via Influence Approximation

Continual Learning with Synthetic Boundary Experience Blending
