The field of machine learning is moving toward more nuanced evaluation metrics and stronger methods for weakly supervised learning. Researchers are developing metrics that handle subjective or fuzzy class boundaries and proposing methods to detect and correct noisy labels in datasets. There is also growing interest in positive-unlabeled (PU) learning, with a focus on building fair, realistic evaluation benchmarks and on algorithms that learn from limited positive data and abundant unlabeled data. Noteworthy papers include:
- Semantic F1 Scores, which introduces an evaluation metric that quantifies semantic relatedness between predicted and gold labels, yielding fairer evaluation in domains with human disagreement or fuzzy category boundaries (one plausible construction is sketched after this list).
- Noisy-Pair Robust Representation Alignment for Positive-Unlabeled Learning, which proposes a non-contrastive PU learning framework reporting substantial improvements over state-of-the-art PU methods across diverse datasets (see the PU background sketch after this list).
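
To make the Semantic F1 idea concrete, here is a minimal sketch of one plausible construction, analogous to BERTScore-style greedy matching: each predicted label is credited with its best cosine match among the gold labels (soft precision), and each gold label with its best match among the predictions (soft recall). The `embed` helper is a hypothetical stand-in for any sentence encoder; the paper's exact formulation may differ.

```python
import numpy as np

def semantic_f1(pred_labels, gold_labels, embed, eps=1e-12):
    """Soft F1 where exact label match is replaced by cosine similarity.

    `embed` (hypothetical helper) maps a list of strings to an (n, d)
    array of label embeddings -- any sentence encoder works.
    """
    P = embed(pred_labels)   # (|pred|, d)
    G = embed(gold_labels)   # (|gold|, d)
    # Row-normalize so the dot product below is cosine similarity.
    P = P / (np.linalg.norm(P, axis=1, keepdims=True) + eps)
    G = G / (np.linalg.norm(G, axis=1, keepdims=True) + eps)
    S = P @ G.T              # pairwise similarity, (|pred|, |gold|)
    # Soft precision: best gold match per prediction; soft recall: vice versa.
    precision = S.max(axis=1).mean()
    recall = S.max(axis=0).mean()
    return 2 * precision * recall / (precision + recall + eps)
```

In practice `embed` could be, for instance, `SentenceTransformer("all-MiniLM-L6-v2").encode` from the sentence-transformers library; with exact one-hot "embeddings" the metric reduces to the ordinary set-based F1.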
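As background on the PU setting the second paper targets, the sketch below shows the classic non-negative PU (nnPU) risk estimator of Kiryo et al. (2017), which many modern PU methods build on. This is standard prior work, not the paper's non-contrastive alignment objective: the unlabeled set is treated as negatives, and the resulting bias is corrected using an assumed class prior, with the corrected negative risk clamped at zero.

```python
import torch
import torch.nn.functional as F

def nnpu_risk(scores_pos, scores_unl, prior):
    """Non-negative PU risk (Kiryo et al., 2017) with the logistic loss.

    scores_pos: model outputs f(x) on labeled-positive examples
    scores_unl: model outputs f(x) on unlabeled examples
    prior:      assumed class prior pi = P(y = +1), a hyperparameter
    """
    # softplus(-z) is the logistic loss for label +1; softplus(z) for label -1.
    risk_pos = F.softplus(-scores_pos).mean()        # positives scored as positives
    risk_pos_as_neg = F.softplus(scores_pos).mean()  # positives scored as negatives
    risk_unl_as_neg = F.softplus(scores_unl).mean()  # unlabeled scored as negatives
    # Unbiased negative risk: unlabeled risk minus the positive contamination;
    # clamping at zero prevents the estimator from going negative and overfitting.
    neg_risk = risk_unl_as_neg - prior * risk_pos_as_neg
    return prior * risk_pos + torch.clamp(neg_risk, min=0.0)
```

The clamp is what distinguishes nnPU from the earlier unbiased PU estimator: with flexible models, the uncorrected negative risk can become negative, and minimizing it drives severe overfitting.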