Advances in Robust Learning under Noisy Labels

Research on learning from noisy labels is converging on methods that stay reliable when annotation quality cannot be trusted. Recent work spans several threads: making active learning robust to unreliable annotators, detecting mislabeled samples during training (for example via dual low-rank adaptation), reweighting samples in meta-learning with theoretical guarantees, and generalizing to open-set domains under label noise. A complementary thread exploits the geometric structure and implicit bias of trained networks, both to improve robustness and to mount more efficient membership inference attacks.

Notable papers include Reliable Active Learning from Unreliable Labels via Neural Collapse Geometry, which uses neural collapse geometry to assess label reliability during active learning; ImpMIA, which leverages the implicit bias of neural networks for membership inference under realistic scenarios; EReLiFM, which proposes evidential reliability-aware residual flow meta-learning for open-set domain generalization under noisy labels; and SHAPOOL, which shares shadow models via a Mixture-of-Experts design to make inference attacks more efficient.
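The neural-collapse idea behind the active learning work can be illustrated with a short, self-contained sketch. This is a minimal illustration of the general principle, not the paper's algorithm: under neural collapse, penultimate-layer features cluster tightly around their class means, so a sample whose feature aligns better with another class's mean than with its labeled class's mean is a likely noisy-label candidate. All function names and the toy data below are illustrative assumptions.

```python
import numpy as np

def class_means(features: np.ndarray, labels: np.ndarray, num_classes: int) -> np.ndarray:
    """Per-class means of penultimate-layer features, shape (C, D)."""
    return np.stack([features[labels == c].mean(axis=0) for c in range(num_classes)])

def reliability_scores(features: np.ndarray, labels: np.ndarray, num_classes: int) -> np.ndarray:
    """Cosine alignment between each feature and its labeled class mean,
    minus its best alignment to any competing class mean; low (negative)
    scores flag likely-noisy labels."""
    means = class_means(features, labels, num_classes)
    means = means / np.linalg.norm(means, axis=1, keepdims=True)
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = feats @ means.T                       # (N, C) cosine similarities
    own = sims[np.arange(len(labels)), labels]   # alignment with assigned label
    sims[np.arange(len(labels)), labels] = -np.inf
    rival = sims.max(axis=1)                     # strongest competing class
    return own - rival

# Toy usage: 2-D features for 3 well-separated classes, one injected label flip.
rng = np.random.default_rng(0)
feats = np.concatenate([rng.normal(m, 0.1, size=(20, 2))
                        for m in ([0, 3], [3, 0], [-3, -3])])
labels = np.repeat([0, 1, 2], 20)
labels[5] = 1  # mislabel a class-0 point as class 1
scores = reliability_scores(feats, labels, num_classes=3)
print("most suspicious index:", int(scores.argmin()))  # expect 5
```

The same score can rank newly annotated samples, which is the kind of geometric reliability signal an active learner could use to decide which labels to trust and which to re-query.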

Sources

Reliable Active Learning from Unreliable Labels via Neural Collapse Geometry

Weed Out, Then Harvest: Dual Low-Rank Adaptation is an Effective Noisy Label Detector for Noise-Robust Learning

ImpMIA: Leveraging Implicit Bias for Membership Inference Attack under Realistic Scenarios

Revisiting Meta-Learning with Noisy Labels: Reweighting Dynamics and Theoretical Guarantees

EReLiFM: Evidential Reliability-Aware Residual Flow Meta-Learning for Open-Set Domain Generalization under Noisy Labels

Toward Efficient Inference Attacks: Shadow Model Sharing via Mixture-of-Experts
