The field of machine learning is moving toward methods that can learn effectively from imperfect supervision, such as interval targets, partial labels, and positive-unlabeled data. Researchers are exploring new loss functions, probabilistic formulations, and optimization techniques to improve model performance in these settings. Notably, amortized variational inference and min-max learning formulations are showing promising results.
Some noteworthy papers in this area include (each underlying idea is sketched briefly after the list):
- Amortized Variational Inference for Partial-Label Learning, which introduces a novel probabilistic framework for label disambiguation.
- Learning from Interval Targets, which proposes a min-max learning formulation for regression with interval targets.
- Cost-Sensitive Unbiased Risk Estimation for Multi-Class Positive-Unlabeled Learning, which presents a cost-sensitive multi-class PU method based on adaptive loss weighting.
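To make the label-disambiguation idea concrete, here is a minimal, hypothetical sketch of amortized inference for partial labels: an inference network scores all classes, the scores are masked to each instance's candidate set to form a soft posterior, and the classifier is trained against that posterior. This illustrates the general technique only, not the paper's exact model or objective; all function names are made up.

```python
import torch
import torch.nn.functional as F

def candidate_posterior(inference_logits, candidate_mask):
    """Amortized label posterior restricted to the candidate set.

    inference_logits: (batch, num_classes) scores from an inference network
    candidate_mask:   (batch, num_classes) 1 for candidate labels, 0 otherwise
    """
    # Send non-candidate labels to -inf so the softmax puts zero mass on them.
    masked = inference_logits.masked_fill(candidate_mask == 0, float("-inf"))
    return F.softmax(masked, dim=-1)

def partial_label_loss(classifier_logits, inference_logits, candidate_mask):
    """Cross-entropy between the classifier and the amortized soft posterior."""
    with torch.no_grad():
        q = candidate_posterior(inference_logits, candidate_mask)  # soft pseudo-labels
    log_p = F.log_softmax(classifier_logits, dim=-1)
    return -(q * log_p).sum(dim=-1).mean()
```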
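For the min-max view of interval targets, the inner maximization has a closed form under squared error: the worst-case loss over an interval [lo, hi] is attained at whichever endpoint is farther from the prediction. The sketch below assumes that setup; it is not the paper's exact formulation, and the function name is illustrative.

```python
import torch

def minmax_interval_loss(pred, lo, hi):
    """Worst-case squared error over the target interval [lo, hi].

    pred, lo, hi: tensors of shape (batch,). The inner max over y in [lo, hi]
    of (pred - y)^2 is attained at the endpoint farthest from pred.
    """
    worst = torch.maximum((pred - lo) ** 2, (pred - hi) ** 2)
    return worst.mean()

# Example: even a prediction inside the interval pays the distance to the
# farther endpoint, which the outer minimization over the model then shrinks.
pred = torch.tensor([2.0, 5.0])
lo, hi = torch.tensor([1.0, 4.0]), torch.tensor([3.0, 9.0])
print(minmax_interval_loss(pred, lo, hi))  # tensor(8.5) -> mean of 1.0 and 16.0
```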
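The paper's cost-sensitive, multi-class estimator with adaptive loss weighting is not reproduced here; as background, this is a minimal sketch of the standard binary unbiased PU risk that such estimators build on, where the unlabeled data stands in for the unavailable negative risk via the class prior (assumed known or estimated).

```python
import torch

def unbiased_pu_risk(scores_p, scores_u, prior):
    """Standard binary unbiased PU risk with the sigmoid loss.

    scores_p: model outputs on labeled positive examples
    scores_u: model outputs on unlabeled examples
    prior:    class prior pi_p = P(y = +1), assumed known or estimated
    """
    loss_pos = torch.sigmoid(-scores_p).mean()        # positives treated as +1
    loss_pos_as_neg = torch.sigmoid(scores_p).mean()  # positives treated as -1
    loss_unl_as_neg = torch.sigmoid(scores_u).mean()  # unlabeled treated as -1
    # E_u[l(f, -1)] = pi_p * E_p[l(f, -1)] + (1 - pi_p) * E_n[l(f, -1)],
    # so the negative risk can be estimated without any negative labels.
    return prior * loss_pos + loss_unl_as_neg - prior * loss_pos_as_neg
```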