The field of machine learning is moving towards developing more robust and generalizable models, particularly in applications with multiple heterogeneous data-generating sources. Researchers are exploring frameworks that account for distributional uncertainty within each group while preserving the objective of improving worst-group performance. Ensemble learning is also being revisited, with a focus on incorporating margin variance into the loss function to enhance robustness and generalization. Dataset condensation, in turn, is being reexamined through a unified framework that encompasses existing methods and extends the task-specific notion to a more general definition. Noteworthy papers include Group Distributionally Robust Machine Learning under Group Level Distributional Uncertainty, which proposes a Wasserstein-based distributionally robust optimization framework, and Hadamard-Riemannian Optimization for Margin-Variance Ensemble, which introduces an ensemble learning framework that explicitly incorporates margin variance into the loss function. A short sketch of the two objectives is given below.
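The worst-group objective and the margin-variance penalty can be made concrete with a minimal sketch. The snippet below is illustrative only and does not reproduce the cited papers' methods: group_dro_loss averages the loss within each group and takes the maximum (the standard group-DRO surrogate, without the papers' within-group Wasserstein uncertainty), and margin_variance_loss penalizes the variance of per-sample ensemble margins alongside their mean. The function names, the binary-label margin convention, and the penalty weight lam are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def group_dro_loss(logits, labels, group_ids, num_groups):
    """Worst-group surrogate: average the loss within each group, take the max.

    Illustrative sketch; the cited paper additionally models distributional
    uncertainty within each group (e.g. a Wasserstein ball), omitted here.
    """
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    group_losses = []
    for g in range(num_groups):
        mask = group_ids == g
        if mask.any():
            group_losses.append(per_sample[mask].mean())
    return torch.stack(group_losses).max()

def margin_variance_loss(member_scores, weights, labels, lam=0.1):
    """Ensemble objective trading off mean margin against margin variance.

    member_scores: (num_members, batch) real-valued scores for the +1 class;
    weights: (num_members,) ensemble weights; labels in {0, 1}.
    The margin definition and lam are illustrative assumptions; the paper's
    exact formulation may differ.
    """
    signed = labels.float() * 2.0 - 1.0          # map {0, 1} -> {-1, +1}
    margins = signed * (weights @ member_scores)  # weighted-vote margin per sample
    return -margins.mean() + lam * margins.var()  # reward large, low-variance margins
```

Taking the maximum over group losses focuses the gradient on the currently worst group; smoother variants, such as exponentially weighting groups by their loss, are also common in practice.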