The field of numerical linear algebra and optimization is advancing on several fronts, driven by new algorithms and techniques. One key direction is the integration of randomized methods with traditional eigensolvers, enabling faster computation of partial eigendecompositions; for large-scale matrices, this hybrid approach can achieve substantial speedups while maintaining high accuracy. Another area of progress is the design of efficient algorithms that solve empirical risk minimization problems to high accuracy in nearly-linear time. In addition, new data structures and techniques, such as adaptive matrix sparsification and implicit regularization, are improving the performance of a range of optimization methods.

Noteworthy papers include:

Randomized-Accelerated FEAST: a hybrid algorithm for efficiently computing partial eigendecompositions of large-scale matrices.

Adaptive Matrix Sparsification and Applications to Empirical Risk Minimization: an algorithm that solves empirical risk minimization problems to high accuracy in nearly-linear time.

PIBNet: a learning-based approach for simulating multiple scattering problems that leverages a physics-inspired graph-based strategy to model obstacles and their long-range interactions efficiently.

Tuning-Free Structured Sparse Recovery of Multiple Measurement Vectors using Implicit Regularization: a tuning-free framework that leverages implicit regularization to remove the need for careful parameter tuning or prior knowledge of the signal's sparsity and the noise variance.
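To make the hybrid randomized-eigensolver idea concrete, here is a minimal sketch of randomized subspace iteration with a Rayleigh-Ritz projection, a standard way to approximate a partial eigendecomposition of a large symmetric matrix. This illustrates the general randomized approach, not the FEAST contour-integration algorithm itself; the function name and parameters are illustrative assumptions.

```python
import numpy as np

def randomized_partial_eigh(A, k, oversample=10, power_iters=4, seed=0):
    """Approximate the k largest-magnitude eigenpairs of a symmetric
    matrix A via a randomized sketch plus Rayleigh-Ritz projection.
    (Illustrative sketch; names and defaults are assumptions.)"""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    # Gaussian sketch: applying A to a random test matrix captures
    # the dominant eigenspace; orthonormalize the result.
    Q, _ = np.linalg.qr(A @ rng.standard_normal((n, k + oversample)))
    # A few power iterations sharpen the subspace estimate.
    for _ in range(power_iters):
        Q, _ = np.linalg.qr(A @ Q)
    # Rayleigh-Ritz: solve the small projected eigenproblem exactly.
    vals, vecs = np.linalg.eigh(Q.T @ A @ Q)
    # Keep the k eigenvalues of largest magnitude, lifted back to R^n.
    idx = np.argsort(np.abs(vals))[::-1][:k]
    return vals[idx], Q @ vecs[:, idx]

# Demo on a synthetic symmetric matrix with a decaying spectrum.
rng = np.random.default_rng(1)
V, _ = np.linalg.qr(rng.standard_normal((300, 300)))
d = np.concatenate([[100.0, 50.0, 20.0, 10.0, 5.0], np.full(295, 0.1)])
A = (V * d) @ V.T
vals, vecs = randomized_partial_eigh(A, k=5)
print(np.round(np.sort(vals)[::-1], 6))
```

The cost is dominated by a handful of matrix-times-block products, which is why this style of method scales to large (especially sparse) matrices where a full eigendecomposition is infeasible.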