Efficient Algorithms for Large-Scale Matrix Computations

The field of large-scale matrix computations is moving toward efficient, pass-efficient algorithms that can handle massive matrices under tight memory and compute budgets. One focus is the design of randomized algorithms that deliver accurate approximations of matrix operations, such as low-rank approximation and eigenvector computation, while minimizing the number of passes over the input matrix. Another is the development of preconditioners for indefinite least squares problems, which can accelerate the convergence of iterative solvers. Noteworthy papers in this area include "On Subsample Size of Quantile-Based Randomized Kaczmarz", which analyzes the subsample size required for quantile-based randomized Kaczmarz methods to achieve linear convergence, and "Fast One-Pass Sparse Approximation of the Top Eigenvectors of Huge Low-Rank Matrices", which presents a one-pass algorithm for sparsely approximating the top eigenvectors of huge low-rank matrices.
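
To make the quantile-based randomized Kaczmarz idea mentioned above concrete, here is a minimal Python/NumPy sketch. It is an illustration of the general technique, not the cited paper's algorithm: the function name, parameter names, and default values (subsample size, quantile level, iteration count) are assumptions chosen for readability, while the paper's contribution concerns how large the per-step subsample must be to guarantee linear convergence.

```python
import numpy as np

def quantile_randomized_kaczmarz(A, b, q=0.7, subsample=50, iters=2000, seed=0):
    """Sketch of quantile-based randomized Kaczmarz (illustrative parameters).

    At each step, draw a random subsample of rows, compute their residuals,
    and update with a row whose residual magnitude does not exceed the q-th
    quantile of the subsample, screening out rows likely to be corrupted.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        idx = rng.choice(m, size=min(subsample, m), replace=False)
        res = A[idx] @ x - b[idx]
        thresh = np.quantile(np.abs(res), q)
        admissible = idx[np.abs(res) <= thresh]
        i = rng.choice(admissible)
        # Standard Kaczmarz step: project x onto the hyperplane a_i^T x = b_i.
        a_i = A[i]
        x -= (a_i @ x - b[i]) / (a_i @ a_i) * a_i
    return x

# Toy usage: a consistent system whose right-hand side has a few large corruptions.
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 20))
x_true = rng.standard_normal(20)
b = A @ x_true
b[rng.choice(500, size=10, replace=False)] += 50.0  # sparse corruptions
x_hat = quantile_randomized_kaczmarz(A, b)
print(np.linalg.norm(x_hat - x_true))
```

Each iteration touches only the sampled rows, which is what makes the method attractive when the full matrix cannot be scanned repeatedly; the quantile screen is what lends robustness to corrupted equations.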

Sources

Pass-efficient Randomized Algorithms for Low-rank Approximation of Quaternion Matrices

On Subsample Size of Quantile-Based Randomized Kaczmarz

On finite precision block Lanczos computations

A parameterized block-splitting preconditioner for indefinite least squares problem

Fast One-Pass Sparse Approximation of the Top Eigenvectors of Huge Low-Rank Matrices? Yes, $MAM^*$!

Solution of Least Squares Problems with Randomized Preconditioned Normal Equations
