Advancements in Numerical Methods for Optimization and Linear Algebra

The field of numerical methods for optimization and linear algebra is seeing significant developments focused on improving the efficiency and accuracy of algorithms for large-scale problems. Researchers are exploring techniques such as derivative-free methods, extended-Krylov-subspace methods, and randomized sketching to tackle complex optimization and linear algebra problems. These approaches are making previously intractable problems solvable and are being applied across scientific computing, machine learning, and data analysis. Notable papers in this area include:

  • Extended-Krylov-subspace methods for trust-region and norm-regularization subproblems, which presents an effective new approach to these subproblems based on extended Krylov subspaces.
  • A CUR Krylov Solver for Large-Scale Linear Matrix Equations, which introduces a CUR-decomposition-based methodology for solving large-scale generalized Sylvester and non-Sylvester multi-term matrix equations.
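To make the randomized-sketching theme above concrete, here is a minimal sketch of the classical randomized Kaczmarz iteration for a consistent linear system (the baseline that sketch-based accelerations build on, not the accelerated variants from the paper itself). Row sampling proportional to squared row norms follows the standard Strohmer-Vershynin scheme; all names and parameters here are illustrative.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=5000, seed=0):
    """Solve a consistent system A x = b by repeatedly projecting the
    iterate onto the hyperplane defined by one randomly chosen row,
    sampling rows with probability proportional to their squared norms."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.sum(A**2, axis=1)
    probs = row_norms / row_norms.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        # Orthogonal projection of x onto {y : A[i] @ y = b[i]}
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Illustrative consistent overdetermined system
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 20))
x_true = rng.standard_normal(20)
b = A @ x_true
x = randomized_kaczmarz(A, b)
err = np.linalg.norm(x - x_true)
```

Each iteration touches only a single row of A, which is why Kaczmarz-type methods are attractive for very large systems; the sketching techniques surveyed above accelerate this basic scheme by working with random low-dimensional projections of blocks of rows.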

Sources

Two Generalized Derivative-free Methods to Solve Large Scale Nonlinear Equations with Convex Constraints

Extended-Krylov-subspace methods for trust-region and norm-regularization subproblems

Accelerated Kaczmarz methods via randomized sketch techniques for solving consistent linear systems

Block Structure Preserving Model Order Reduction for A-EFIE Integral Equation Method

Quadrature for Singular Integrals over convex Polytopes

A CUR Krylov Solver for Large-Scale Linear Matrix Equations

Tighter Bounds for the Randomized Polynomial-Time Simplex Algorithm for Linear Programming

nlKrylov: A Unified Framework for Nonlinear GCR-type Krylov Subspace Methods

Data-driven Model Reduction for Parameter-Dependent Matrix Equations via Operator Inference

Entrywise Approximate Solutions for SDDM Systems in Almost-Linear Time
