The field of machine learning is moving towards more efficient and effective methods for model selection, hyperparameter tuning, and the resolution of annotator disagreement. Recent work has highlighted the importance of accounting for annotator competence and systematic disagreements when training on human-labeled data. There is also a growing trend towards using Bayesian optimization and adaptive successive filtering to speed up automatic machine learning. Noteworthy papers include NUTMEG, which introduces a new Bayesian model for separating signal from noise in annotator disagreement, and BOASF, which proposes a unified framework for speeding up automatic machine learning via adaptive successive filtering. CODA is also notable for its consensus-driven active model selection method, which reduces the annotation effort required to identify the best model by upwards of 70% compared to the previous state of the art.
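To make the adaptive successive filtering idea more concrete, below is a minimal sketch of successive-halving-style model filtering: every surviving candidate is evaluated on a small training budget, the weaker half is discarded, and the budget is doubled for the next round. This is an illustration under assumed details (the scikit-learn candidate pool, the budget schedule, and the halving rule are all assumptions), not the BOASF implementation, which additionally couples the filtering with Bayesian optimization over the search space.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB

# Illustrative candidate pool; in a real AutoML system these would be
# sampled from a larger search space of pipelines and hyperparameters.
candidates = {
    "logreg": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(random_state=0),
    "forest": RandomForestClassifier(n_estimators=50, random_state=0),
    "nb": GaussianNB(),
}

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

budget = 250  # initial number of training examples per candidate
survivors = dict(candidates)
rng = np.random.default_rng(0)

# Successive filtering: score all survivors on a small budget,
# keep the top half, double the budget, and repeat.
while len(survivors) > 1 and budget <= len(X_train):
    idx = rng.choice(len(X_train), size=budget, replace=False)
    scores = {}
    for name, model in survivors.items():
        model.fit(X_train[idx], y_train[idx])
        scores[name] = model.score(X_val, y_val)
    ranked = sorted(scores, key=scores.get, reverse=True)
    survivors = {name: survivors[name] for name in ranked[: max(1, len(ranked) // 2)]}
    budget *= 2

print("selected:", next(iter(survivors)))
```

The appeal of this kind of schedule is that clearly weak candidates are eliminated after only a cheap evaluation, so most of the compute budget is spent on the few configurations that remain plausible winners.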