The field of machine unlearning is evolving rapidly, with a focus on efficient and effective methods for removing the influence of specific data from trained models. Recent research has explored the intersection of machine unlearning and membership inference attacks, highlighting the need for careful assessment of residual privacy risks. Another active area is the development of benchmarks and evaluation metrics for unlearning in recommender systems, which is crucial for real-world deployment. There have also been advances in parameter-efficient unlearning, including module-aware approaches for Transformers.
Noteworthy papers include:

- Evaluating the Defense Potential of Machine Unlearning against Membership Inference Attacks: assesses how vulnerable models remain to membership inference attacks after unlearning algorithms are applied (a minimal evaluation sketch follows this list).
- Towards a Real-World Aligned Benchmark for Unlearning in Recommender Systems: proposes design desiderata and research questions to guide the development of a more realistic benchmark for unlearning in recommender systems.
- Module-Aware Parameter-Efficient Machine Unlearning on Transformers: proposes a module-aware, parameter-efficient approach that uses a learnable pair of masks to pinpoint influence-critical parameters in the heads and filters of Transformers (see the gating sketch below).
- Data Augmentation Improves Machine Unlearning: investigates how different data augmentation strategies affect the performance of unlearning methods.
- Auditing Approximate Machine Unlearning for Differentially Private Models: proposes a holistic audit of the privacy risks of both unlearned and retained samples after approximate unlearning is applied.
- MobText-SISA: Efficient Machine Unlearning for Mobility Logs with Spatio-Temporal and Natural-Language Data: introduces a scalable unlearning framework that extends Sharded, Isolated, Sliced, and Aggregated (SISA) training to heterogeneous spatio-temporal data (a toy SISA example appears below).
- Towards Mitigating Excessive Forgetting in LLM Unlearning via Entanglement-Aware Unlearning with Proxy Constraint: addresses the limitations of existing methods through entanglement-awareness guided loss reweighting and a proxy constraint (sketched last below).
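To make the membership-inference angle concrete, here is a minimal sketch of a loss-threshold attack used as an unlearning audit. It is a generic evaluation recipe, not the method of any paper above; `predict_proba` and the forget/unseen arrays are hypothetical placeholders the caller supplies.

```python
# A minimal loss-threshold membership inference audit for an unlearned model.
# All names (predict_proba, X_forget, ...) are hypothetical placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

def per_sample_losses(predict_proba, X, y):
    """Negative log-likelihood of the true label for each sample."""
    probs = predict_proba(X)  # shape (n_samples, n_classes)
    return -np.log(probs[np.arange(len(y)), y] + 1e-12)

def mia_auc(predict_proba, X_forget, y_forget, X_unseen, y_unseen):
    """AUC of an attack that flags low-loss samples as training members.

    If unlearning succeeded, forget-set losses should be indistinguishable
    from losses on truly unseen data, so the AUC should be close to 0.5.
    """
    loss_f = per_sample_losses(predict_proba, X_forget, y_forget)
    loss_u = per_sample_losses(predict_proba, X_unseen, y_unseen)
    scores = np.concatenate([-loss_f, -loss_u])  # lower loss => more member-like
    labels = np.concatenate([np.ones_like(loss_f), np.zeros_like(loss_u)])
    return roc_auc_score(labels, scores)
```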
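The module-aware masking idea can be illustrated with learnable per-head gates. This is a generic gating sketch under the assumption that masks multiply attention-head outputs; it is not the paper's exact formulation.

```python
# A minimal sketch of learnable per-head masks for parameter-efficient
# unlearning. Only the gate logits would be trained with an unlearning
# objective; heads whose gates shrink toward zero are treated as
# influence-critical. Generic illustration, not the paper's method.
import torch
import torch.nn as nn

class HeadMask(nn.Module):
    """Learnable gates applied to multi-head attention outputs."""
    def __init__(self, n_heads: int):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_heads))  # one gate per head

    def forward(self, head_outputs: torch.Tensor) -> torch.Tensor:
        # head_outputs: (batch, n_heads, seq_len, d_head)
        gates = torch.sigmoid(self.logits).view(1, -1, 1, 1)
        return gates * head_outputs
```

Because only the mask parameters receive gradients while the backbone stays frozen, the update is parameter-efficient by construction.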
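For readers unfamiliar with SISA, the core idea is easy to show in code: train isolated constituent models on disjoint shards and aggregate their predictions, so unlearning a sample only requires retraining its shard. The toy example below omits the slicing step and assumes nonnegative integer class labels; it is a sketch of plain SISA, not of MobText-SISA's spatio-temporal extensions.

```python
# A toy SISA ensemble: sharded, isolated training with vote aggregation.
# Unlearning retrains only the shard that contained the removed sample.
import numpy as np
from sklearn.linear_model import LogisticRegression

class SISAEnsemble:
    def __init__(self, n_shards=4, seed=0):
        self.n_shards = n_shards
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.X, self.y = X, y
        idx = self.rng.permutation(len(X))
        self.shards = np.array_split(idx, self.n_shards)  # disjoint shards
        self.models = [LogisticRegression(max_iter=1000).fit(X[s], y[s])
                       for s in self.shards]

    def unlearn(self, sample_idx):
        # Locate the shard holding the sample and retrain only that model.
        for i, shard in enumerate(self.shards):
            if sample_idx in shard:
                self.shards[i] = shard[shard != sample_idx]
                s = self.shards[i]
                self.models[i] = LogisticRegression(max_iter=1000).fit(
                    self.X[s], self.y[s])
                return

    def predict(self, X):
        # Aggregate constituent predictions by majority vote.
        votes = np.stack([m.predict(X) for m in self.models]).astype(int)
        return np.apply_along_axis(
            lambda col: np.bincount(col).argmax(), 0, votes)
```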
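Finally, the reweighting-plus-constraint pattern from the last paper can be sketched generically: apply weighted gradient ascent on the forget set while a KL term to a frozen reference model limits collateral forgetting on retained data. The weighting scheme and `lam` below are illustrative assumptions, not the paper's exact entanglement measure or proxy constraint.

```python
# A generic sketch of reweighted unlearning with a proxy constraint; NOT
# the paper's exact formulation. Per-sample weights are assumed to come
# from some entanglement score; the KL term anchors retain-set behavior.
import torch
import torch.nn.functional as F

def unlearning_loss(model, ref_model, forget_batch, retain_batch,
                    weights, lam=1.0):
    xf, yf = forget_batch
    xr, _ = retain_batch
    # Weighted gradient ascent on the forget set (negated cross-entropy).
    ce = F.cross_entropy(model(xf), yf, reduction="none")
    forget_term = -(weights * ce).mean()
    # Proxy constraint: keep retain-set predictions close to a frozen
    # reference model to curb excessive forgetting.
    with torch.no_grad():
        ref_logp = F.log_softmax(ref_model(xr), dim=-1)
    cur_logp = F.log_softmax(model(xr), dim=-1)
    proxy = F.kl_div(cur_logp, ref_logp, reduction="batchmean",
                     log_target=True)
    return forget_term + lam * proxy
```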