Machine learning research is placing increasing emphasis on robustness and unlearning: developing methods that withstand adversarial attacks and that can forget specific training samples on request. Recent work has examined the adversarial robustness of nonparametric regression, proposing new estimators and establishing fundamental limits on estimation error. There has also been significant progress in machine unlearning, with new algorithms and frameworks for removing the influence of corrupted data from trained models. Together, these advances stand to improve the reliability and trustworthiness of machine learning systems. Noteworthy papers include:
- Adversarial Robustness of Nonparametric Regression, which establishes a minimax lower bound on estimation error and shows that the classical smoothing spline estimator can exhibit robustness against adversarial corruption.
- Redirection for Erasing Memory (REM), which proposes a method for universal unlearning that performs strongly across a broad range of unlearning tasks.
- From Dormant to Deleted: Tamper-Resistant Unlearning Through Weight-Space Regularization, which proposes a new class of methods that achieve state-of-the-art resistance to relearning attacks.
- Machine Unlearning under Overparameterization, which formalizes a new notion of an unlearning solution in the overparameterized regime and proposes an algorithmic framework that outperforms existing baselines.
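The robustness phenomenon behind the first paper above can be illustrated with a toy experiment: fit a smoothing spline to noisy samples of a smooth function after an adversary shifts a small fraction of the responses, and measure the error against the ground truth. This is a minimal sketch, not the paper's estimator or analysis; the target function, noise level, corruption pattern, and smoothing parameter are all illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)

# Ground truth: a smooth function observed with Gaussian noise.
n = 200
x = np.linspace(0.0, 1.0, n)
y_true = np.sin(2 * np.pi * x)
y = y_true + rng.normal(0.0, 0.1, size=n)

# Adversarial corruption: shift a small fraction of the responses.
idx = rng.choice(n, size=10, replace=False)
y_corrupt = y.copy()
y_corrupt[idx] += 3.0

# Smoothing spline fit; s bounds the residual sum of squares and is set
# large enough here that the fit can stay smooth rather than chase outliers
# (an illustrative choice, not a principled tuning rule).
spline = UnivariateSpline(x, y_corrupt, s=120.0)

rmse = np.sqrt(np.mean((spline(x) - y_true) ** 2))
print(f"RMSE of spline fit vs. ground truth: {rmse:.3f}")
```

With enough smoothing, the fit is pulled only mildly toward the corrupted points, so its error against the true function stays well below the raw corruption level, loosely mirroring the robustness the paper establishes for classical smoothing splines.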