The field of AI safety is moving toward machine learning models that remain reliable and safe when deployed in open-world settings. Researchers are focusing on the key reliability issues that arise from distributional uncertainty and unknown classes. Novel frameworks jointly optimize for in-distribution accuracy and reliable behavior on unseen data, enabling models to recognize and handle novel inputs without requiring labeled out-of-distribution data.

Noteworthy papers in this area include:

Foundations of Unknown-aware Machine Learning introduces an unknown-aware learning framework and proposes new outlier synthesis methods that generate informative unknowns during training.

The Achilles Heel of AI: Fundamentals of Risk-Aware Training Data for High-Consequence Models introduces smart-sizing, a training data strategy that emphasizes label diversity and model-guided selection.

Why Can Accurate Models Be Learned from Inaccurate Annotations? investigates why models can learn effectively from inaccurate annotations and proposes a lightweight plug-in that helps classifiers retain principal subspace information while mitigating the noise induced by label inaccuracy.
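
To make the unknown-aware training idea concrete, the sketch below trains a toy classifier on in-distribution data while regularizing its predictions on synthesized outliers toward uniform confidence, so that low-confidence inputs can be flagged as novel at test time. This is a minimal, generic illustration, not the specific method of any paper cited above; the Gaussian-perturbation outlier synthesis, the KL regularizer, and the confidence threshold are assumptions chosen for brevity.

```python
# Minimal sketch of unknown-aware training with synthesized outliers.
# Generic illustration only: the outlier synthesis and the uniform-prediction
# regularizer below are assumptions, not the cited papers' actual methods.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy in-distribution data: two Gaussian blobs (classes 0 and 1).
n, d, num_classes = 512, 2, 2
x_id = torch.cat([torch.randn(n, d) + 2.0, torch.randn(n, d) - 2.0])
y_id = torch.cat([torch.zeros(n, dtype=torch.long), torch.ones(n, dtype=torch.long)])

model = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, num_classes))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

def synthesize_outliers(x, scale=4.0):
    """Crude outlier synthesis: push in-distribution points far from the data."""
    return x + scale * torch.randn_like(x)

for step in range(200):
    # Standard cross-entropy for in-distribution accuracy.
    logits_id = model(x_id)
    ce_loss = F.cross_entropy(logits_id, y_id)

    # Regularizer: predictions on synthetic unknowns are pushed toward the
    # uniform distribution, so max-softmax confidence stays low off-distribution.
    x_ood = synthesize_outliers(x_id)
    log_probs_ood = F.log_softmax(model(x_ood), dim=1)
    uniform = torch.full_like(log_probs_ood, 1.0 / num_classes)
    ood_loss = F.kl_div(log_probs_ood, uniform, reduction="batchmean")

    loss = ce_loss + 0.5 * ood_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# At inference, flag inputs whose max softmax probability falls below a threshold.
with torch.no_grad():
    probs = F.softmax(model(torch.tensor([[0.0, 0.0], [2.0, 2.0]])), dim=1)
    is_unknown = probs.max(dim=1).values < 0.7
    print(probs, is_unknown)
```

The design choice illustrated here is the joint objective: one term preserves in-distribution accuracy while the other shapes the model's confidence on inputs it should treat as unknown, which is what allows novelty to be detected without any labeled out-of-distribution data.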