The field of machine learning is moving toward more efficient and more interpretable models. Recent research has focused on active learning, in which a model selectively requests labels for unlabeled examples, cutting the amount of labeled data needed for training. In parallel, there has been a surge of methods for estimating the influence of individual training examples on model behavior, a capability that is central to model debugging and data curation. Noteworthy papers include Partial Batch Label Sampling for efficient active learning and f-INE, a hypothesis-testing framework for estimating influence under training randomness. Work on budget-constrained active learning and on variable importance methods has likewise reported gains in model performance and robustness. Taken together, these advances stand to improve both the efficiency and the reliability of machine learning models; brief sketches of the underlying techniques follow.
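
To make the active learning loop concrete, here is a minimal sketch of pool-based uncertainty (margin) sampling in Python. It illustrates the generic query-selection idea, not the Partial Batch Label Sampling method itself; the dataset, model, batch size, and number of rounds are all illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Seed the labeled set with a few examples from each class.
labeled = [int(i) for c in np.unique(y) for i in np.where(y == c)[0][:5]]
pool = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(5):                            # five query rounds
    model.fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[pool])
    top2 = np.sort(probs, axis=1)[:, -2:]     # two largest class probabilities
    margin = top2[:, 1] - top2[:, 0]          # small margin = uncertain
    for q in sorted(np.argsort(margin)[:10], reverse=True):
        labeled.append(pool.pop(q))           # oracle "labels" the query
print(f"labeled examples after querying: {len(labeled)}")
```

In practice, batch selection usually adds a diversity term as well, so that the points queried in one round are not near-duplicates of each other.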
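For influence estimation, the sketch below computes first-order, gradient dot-product influence scores (in the style of TracIn) for a hand-rolled logistic regression. It is not an implementation of f-INE, which additionally frames influence estimation as a hypothesis test that accounts for training randomness; the data, learning rate, and step count are assumptions made for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_logloss(w, x, y):
    """Gradient of the logistic loss at one example (label y in {0, 1})."""
    return (sigmoid(w @ x) - y) * x

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 5))
y_train = (X_train[:, 0] > 0).astype(float)

w = np.zeros(5)
for _ in range(200):                          # plain full-batch gradient descent
    grads = [grad_logloss(w, x, y) for x, y in zip(X_train, y_train)]
    w -= 0.5 * np.mean(grads, axis=0)

x_test, y_test = rng.normal(size=5), 1.0
g_test = grad_logloss(w, x_test, y_test)

# Gradient alignment: a positive score means training on that example
# tends to reduce the test loss (a "proponent" of the test prediction).
scores = np.array([grad_logloss(w, x, y) @ g_test
                   for x, y in zip(X_train, y_train)])
print("most helpful train indices:", np.argsort(scores)[-3:][::-1])
```

A single-checkpoint dot product is a crude surrogate: TracIn-style methods sum these alignments over training checkpoints, and the point of a framework like f-INE is to make the resulting influence estimates statistically meaningful despite the randomness of training.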
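Finally, a standard baseline for variable importance is permutation importance: shuffle one feature at a time in held-out data and record the drop in score. The variable importance work cited above is not detailed in this summary, so the sketch shows only this common baseline; the dataset and model are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=8, n_informative=3,
                           random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
base = model.score(X_va, y_va)                # accuracy with intact features

rng = np.random.default_rng(0)
for j in range(X.shape[1]):
    X_perm = X_va.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])   # break feature-label link
    print(f"feature {j}: importance {base - model.score(X_perm, y_va):+.3f}")
```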