Current work in machine learning is increasingly focused on class imbalance and label shift, two problems that are prevalent in real-world applications. Researchers are developing methods to improve the reliability of classification models under extreme class imbalance, where recall and calibration are critical. Recent contributions include mathematically motivated frameworks that synthesize and filter minority-class data, as well as dynamic methods that estimate label shift and adapt to streaming data (a minimal illustrative sketch of the latter idea appears after the list below).

Noteworthy papers in this area include:

- Boundary-Aware Adversarial Filtering for Reliable Diagnosis under Extreme Class Imbalance, which proposes a filtering framework that improves recall and calibration.
- Bayesian-based Online Label Shift Estimation with Dynamic Dirichlet Priors, which introduces a Bayesian framework for accurate label shift estimation and improves classification accuracy.
- Sampling Control for Imbalanced Calibration in Semi-Supervised Learning, which proposes a unified framework that suppresses model bias through decoupled sampling control.
- DiCaP: Distribution-Calibrated Pseudo-labeling for Semi-Supervised Multi-Label Learning, which theoretically verifies the importance of correctness likelihood in pseudo-labeling and proposes a correctness-aware framework.
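
To make the streaming label-shift idea concrete, the following is a minimal sketch of an online estimator that keeps a Dirichlet posterior over the target label distribution and reweights classifier outputs by the estimated prior ratio. This is not the method of the cited paper: it uses soft predicted-label counts directly and omits any confusion-matrix correction for classifier error. The class name `DirichletLabelShiftEstimator`, its parameters, and the example source prior are illustrative assumptions.

```python
import numpy as np


class DirichletLabelShiftEstimator:
    """Toy online estimator of the target label prior under label shift.

    Maintains a Dirichlet posterior over the target class distribution and
    updates it with soft predicted-label counts from each incoming batch.
    Illustrative simplification only: no correction for classifier error.
    """

    def __init__(self, num_classes, source_prior, alpha0=1.0):
        self.source_prior = np.asarray(source_prior, dtype=float)
        # Symmetric Dirichlet prior over the target label distribution.
        self.alpha = np.full(num_classes, alpha0, dtype=float)

    def update(self, probs):
        """Conjugate update: add soft class counts from a batch of
        predicted probabilities with shape (batch_size, num_classes)."""
        self.alpha += probs.sum(axis=0)

    @property
    def target_prior(self):
        """Posterior mean of the Dirichlet, i.e. the estimated target prior."""
        return self.alpha / self.alpha.sum()

    def adjust(self, probs):
        """Reweight outputs by the prior ratio p_t(y) / p_s(y)
        (the standard label-shift correction), then renormalize."""
        w = self.target_prior / self.source_prior
        adjusted = probs * w
        return adjusted / adjusted.sum(axis=1, keepdims=True)


# Usage: stream batches of softmax outputs, update the estimate, and
# correct predictions for the shifted label distribution.
rng = np.random.default_rng(0)
est = DirichletLabelShiftEstimator(num_classes=3, source_prior=[0.6, 0.3, 0.1])
for _ in range(10):
    batch_probs = rng.dirichlet(np.ones(3), size=32)  # placeholder model outputs
    est.update(batch_probs)
    corrected = est.adjust(batch_probs)
print("estimated target prior:", est.target_prior)
```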