The field of machine learning is seeing significant developments in uncertainty quantification and conformal prediction, as researchers actively explore methods to improve the reliability and interpretability of models. A key direction is the integration of conformal prediction with other techniques, such as cooperative games, zonotope-based uncertainty quantification, and hierarchical classification. These approaches aim to address limitations of existing methods, including overfitting, class imbalance, and insufficiently representative hypothesis spaces. Notably, conformal prediction is being extended to long-tail classification, hierarchical multi-label classification, and exceptional model mining, demonstrating its potential to enhance model performance and reliability.
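For readers unfamiliar with the core procedure these papers build on, the following is a minimal sketch of standard split conformal prediction for classification. It is generic background, not the method of any paper listed below; the linear-softmax "classifier" and synthetic data are stand-ins for a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in classifier (assumption: any probabilistic classifier works here;
# a fixed random linear-softmax model substitutes for a trained one).
def predict_proba(X, W):
    logits = X @ W
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

n_cal, n_test, d, k = 500, 200, 5, 3
W = rng.normal(size=(d, k))
X_cal = rng.normal(size=(n_cal, d))
X_test = rng.normal(size=(n_test, d))
# Labels drawn from the model's own distribution, so model and data agree
p_cal = predict_proba(X_cal, W)
y_cal = np.array([rng.choice(k, p=p) for p in p_cal])
p_test = predict_proba(X_test, W)
y_test = np.array([rng.choice(k, p=p) for p in p_test])

alpha = 0.1  # target miscoverage rate
# Nonconformity score: 1 minus the predicted probability of the true class
scores = 1.0 - p_cal[np.arange(n_cal), y_cal]
# Finite-sample-corrected conformal quantile of the calibration scores
q = np.quantile(scores, np.ceil((n_cal + 1) * (1 - alpha)) / n_cal,
                method="higher")

# Prediction set: every class whose nonconformity is at or below the threshold
pred_sets = [np.where(1.0 - p_test[i] <= q)[0] for i in range(n_test)]
coverage = np.mean([y_test[i] in pred_sets[i] for i in range(n_test)])
```

Under exchangeability of calibration and test data, the resulting sets contain the true label with probability at least 1 − α, which is the marginal guarantee the methods below refine.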
Noteworthy papers in this area include:

- Conformalized Exceptional Model Mining: combines conformal prediction with exceptional model mining to identify cohesive subgroups where model performance deviates exceptionally.
- Zono-Conformal Prediction: proposes zonotope-based prediction sets for uncertainty quantification, offering improved coverage guarantees and reduced conservatism.
- Tail-Aware Conformal Prediction: addresses imbalanced coverage in long-tail classification by exploiting the long-tail structure to mitigate under-coverage of tail classes.
- Hierarchical Conformal Classification: extends conformal prediction to incorporate class hierarchies, yielding more informative and reliable prediction sets.
- Hierarchy-Consistent Learning and Adaptive Loss Balancing: proposes a classifier for hierarchical multi-label classification that maintains structural consistency and balances loss weighting, achieving higher classification accuracy and lower hierarchical violation rates.
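To make the tail under-coverage issue concrete, the sketch below shows the textbook class-conditional ("Mondrian") variant of conformal prediction, which calibrates a separate threshold per class so abundant head classes cannot dominate calibration. This is an assumption-laden illustration of the general idea, not the specific Tail-Aware Conformal Prediction method; all data here are fabricated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy long-tailed setup: class 0 is common, class 2 is rare
n_cal, k = 600, 3
y_cal = rng.choice(k, size=n_cal, p=[0.7, 0.25, 0.05])
# Fabricated softmax outputs where the true class gets a noisy-high probability
p_cal = rng.dirichlet(alpha=np.ones(k), size=n_cal)
p_cal[np.arange(n_cal), y_cal] += 1.0
p_cal /= p_cal.sum(axis=1, keepdims=True)

alpha = 0.1
scores = 1.0 - p_cal[np.arange(n_cal), y_cal]

# One conformal quantile per class: each tail class is calibrated only on its
# own examples, instead of a single pooled quantile dominated by head classes.
q_per_class = np.empty(k)
for c in range(k):
    s_c = scores[y_cal == c]
    n_c = len(s_c)
    level = min(np.ceil((n_c + 1) * (1 - alpha)) / n_c, 1.0)
    q_per_class[c] = np.quantile(s_c, level, method="higher")

# A test point's set contains class c iff its score for c is within that
# class's own threshold, giving per-class rather than marginal coverage.
def prediction_set(p_row):
    return np.where(1.0 - p_row <= q_per_class)[0]
```

The trade-off is that rare classes have few calibration points, so their thresholds are noisier and sets tend to be larger; exploiting the long-tail structure more cleverly, as the paper above proposes, aims to soften exactly this cost.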