The field of machine learning is moving toward more robust methods for learning from noisy labels. Recent work focuses on making active learning reliable under label noise, detecting mislabeled examples, and learning more effectively from noisy data. One key direction uses geometric structure and the implicit bias of deep networks to improve model robustness; another develops stronger membership inference attacks and tackles open-set domain generalization under noisy labels. Notable papers include Reliable Active Learning via Neural Collapse Geometry, which proposes a framework for reliable active learning from unreliable labels; ImpMIA, which introduces a membership inference attack exploiting the implicit bias of neural networks; EReLiFM, which proposes a residual flow meta-learning approach for open-set domain generalization under noisy labels; and SHAPOOL, which presents a shadow pool training framework for efficient inference attacks.
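To ground the membership inference discussion, the sketch below implements the classic loss-thresholding baseline that newer attacks such as ImpMIA improve upon. This is an illustrative toy, not ImpMIA's implicit-bias method: the losses are synthetic, and the gamma parameters and the threshold grid are arbitrary choices for the demo. The idea is that training-set members tend to incur lower loss than non-members, so predicting "member" when the loss falls below a tuned threshold already beats random guessing.

```python
# Illustrative sketch of a loss-thresholding membership inference
# baseline. NOT the ImpMIA algorithm (which exploits the implicit
# bias of neural networks); synthetic losses stand in for a model.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-example cross-entropy losses: members (training
# points) are typically fit better, so their losses skew lower.
member_losses = rng.gamma(shape=1.0, scale=0.2, size=1000)
nonmember_losses = rng.gamma(shape=2.0, scale=0.5, size=1000)

def predict_member(losses, tau):
    """Predict 'member' (True) when the model's loss is below tau."""
    return losses < tau

def balanced_accuracy(tau):
    """Average of true-positive rate (members flagged) and
    true-negative rate (non-members rejected) at threshold tau."""
    tpr = predict_member(member_losses, tau).mean()
    tnr = (~predict_member(nonmember_losses, tau)).mean()
    return 0.5 * (tpr + tnr)

# Sweep a grid of thresholds and keep the best one.
taus = np.linspace(0.0, 3.0, 301)
best_tau = max(taus, key=balanced_accuracy)
print(f"best tau = {best_tau:.2f}, "
      f"balanced accuracy = {balanced_accuracy(best_tau):.2f}")
```

A balanced accuracy well above 0.5 signals privacy leakage; stronger attacks in this line of work replace the raw loss with richer signals about how the model was fit.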