Research in private learning and optimization is moving toward algorithms that stay efficient and accurate on complex datasets and scenarios. Recent work concentrates on improving the accuracy and robustness of differentially private stochastic gradient descent (DP-SGD) and of adaptive optimizers, with notable progress coming from learning-rate scheduling, feature learning, and population-size reduction.

Several papers stand out. One proposes a learning-rate-aware factorization that improves accuracy in private training; another develops a theoretical framework for analyzing private training through the lens of feature learning. A new variant of Differential Evolution (DE) achieves top-tier performance across multiple benchmark suites. Further contributions include an optimizer with continuously tunable adaptivity and a memory-efficient, sparsity-aware adaptive DP optimizer. Together, these results point toward more practical and effective private training of machine learning models.
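For context on the core algorithm these papers build on: DP-SGD differs from plain SGD in that each example's gradient is clipped to a fixed L2 norm and calibrated Gaussian noise is added before the update, which is what the privacy accounting depends on. The sketch below is a minimal illustration on logistic regression, not an implementation of any of the surveyed methods; the function name `dp_sgd_step` and all parameter defaults are hypothetical.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_mult=1.0, rng=None):
    """One DP-SGD step on logistic regression: per-example gradients are
    clipped to clip_norm, summed, perturbed with Gaussian noise of scale
    noise_mult * clip_norm, then averaged.  (Illustrative sketch only.)"""
    rng = rng or np.random.default_rng()
    n = len(y)
    # Per-example logistic-loss gradients, one row per example (n x d).
    preds = 1.0 / (1.0 + np.exp(-X @ w))
    per_example_grads = (preds - y)[:, None] * X
    # Clip each example's gradient to L2 norm <= clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip_norm)
    # Sum, add Gaussian noise calibrated to the clipping bound, average.
    noisy_sum = clipped.sum(axis=0) + rng.normal(
        scale=noise_mult * clip_norm, size=w.shape)
    return w - lr * noisy_sum / n
```

Clipping bounds each example's influence on the update (its sensitivity), which is what allows the added Gaussian noise to yield a formal privacy guarantee; the surveyed papers refine components of this loop, such as the learning-rate schedule, a factorized update, or the optimizer's adaptivity, rather than the clip-and-noise core itself.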
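Likewise, for readers unfamiliar with DE: it evolves a population by adding scaled difference vectors between randomly chosen members, crossing the result over with the current individual, and keeping the trial point only if it improves the objective. The new variant's specific changes, including how it exploits population-size reduction, are not detailed here; the sketch below shows only the classic DE/rand/1/bin baseline, with all names and defaults chosen for illustration.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, rng=None):
    """Classic DE/rand/1/bin (illustrative baseline, not the new variant):
    difference-vector mutation, binomial crossover, greedy selection."""
    rng = rng or np.random.default_rng()
    lo, hi = np.asarray(bounds, dtype=float).T
    d = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, d))
    fitness = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # Pick three distinct members other than i for the mutation.
            a, b, c = pop[rng.choice(
                [j for j in range(pop_size) if j != i], size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover: take mutant genes with probability CR,
            # forcing at least one gene to come from the mutant.
            mask = rng.random(d) < CR
            mask[rng.integers(d)] = True
            trial = np.where(mask, mutant, pop[i])
            # Greedy selection: keep the trial only if it is no worse.
            f_trial = f(trial)
            if f_trial <= fitness[i]:
                pop[i], fitness[i] = trial, f_trial
    best = np.argmin(fitness)
    return pop[best], fitness[best]
```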