Recent work in optimization and inference centers on sharpening convergence rates and guarantees for a range of algorithms. Researchers are exploring techniques such as momentum-based updates and non-Euclidean projections to improve stochastic optimization methods, and there is growing interest in the behavior of gradient descent across regimes, including the edge of stability. The theory of variational inference is also advancing, with new results on relative smoothness and convergence guarantees. Notable contributions include:

- Stochastic Difference-of-Convex Optimization with Momentum, which shows that momentum enables convergence under standard smoothness and bounded variance assumptions.
- Natural Gradient VI: Guarantees for Non-Conjugate Models, which derives sufficient conditions for relative smoothness and proposes a modified algorithm with non-Euclidean projections.
- Statistical Inference for Linear Functionals of Online Least-squares SGD, which establishes non-asymptotic Berry–Esseen bounds for such functionals.
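To make the last item concrete, here is a minimal sketch of online least-squares SGD in which a fixed linear functional v^T θ of the Polyak–Ruppert averaged iterate is the quantity one would center Berry–Esseen-type statements around. The synthetic Gaussian-design data stream and the 1/√t step size below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Minimal sketch (hypothetical setup): online least-squares SGD on a stream of
# (x_t, y_t) pairs, tracking the linear functional v^T theta of the averaged iterate.
rng = np.random.default_rng(0)
d = 5
theta_star = rng.normal(size=d)              # unknown regression coefficients
v = np.ones(d) / np.sqrt(d)                  # linear functional of interest

theta = np.zeros(d)
theta_bar = np.zeros(d)                      # Polyak-Ruppert average of the iterates
n_steps = 10_000

for t in range(1, n_steps + 1):
    x = rng.normal(size=d)                   # streaming covariate
    y = x @ theta_star + 0.1 * rng.normal()  # noisy response
    grad = (x @ theta - y) * x               # gradient of 0.5 * (x^T theta - y)^2
    eta = 1.0 / np.sqrt(t)                   # illustrative step-size schedule
    theta -= eta * grad
    theta_bar += (theta - theta_bar) / t     # running average of iterates

print("v^T theta_bar :", v @ theta_bar)
print("v^T theta*    :", v @ theta_star)
```

The averaged functional v^T theta_bar is the natural target for non-asymptotic normal approximations of the kind the paper studies, since iterate averaging is what yields the usual root-n-type fluctuations.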