The field of tensor decomposition and low-rank approximation is moving toward more efficient and scalable algorithms for large, sparse tensors. Researchers are exploring new methods for tensor completion, decomposition, and approximation, with a focus on reducing computational cost while improving accuracy. One notable trend is the use of manifold optimization and the discrete empirical interpolation method (DEIM) to improve the efficiency and accuracy of dynamical low-rank approximation. Another active area is the development of data-adaptive tensor low-rank representation models that capture both global and local correlations in tensor data. Noteworthy papers in this area include:
- HOQRI, which proposes a new algorithm for low multilinear rank approximation of large and sparse tensors that avoids the intermediate memory blow-up of earlier methods and guarantees convergence to the set of stationary points.
- Efficient Tensor Completion Algorithms for Highly Oscillatory Operators, which presents low-complexity tensor completion algorithms for reconstructing highly oscillatory operators with improved efficiency and accuracy.
- Interpolatory Dynamical Low-Rank Approximation, which introduces a new class of projected integrators combining explicit Runge-Kutta methods with DEIM-based projections, achieving significant reductions in computational cost.
- Data-Adaptive Transformed Bilateral Tensor Low-Rank Representation for Clustering, which proposes a transformed bilateral tensor low-rank representation model that combines a data-adaptive tensor nuclear norm with a bilateral (two-sided) factorization structure, achieving superior clustering performance.
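To make the notion of "low multilinear rank approximation" concrete, the sketch below implements truncated higher-order SVD (HOSVD), the classical baseline that methods like HOQRI improve upon for large sparse inputs. This is an illustrative NumPy implementation, not code from any of the papers above; the function names (`unfold`, `truncated_hosvd`, `tucker_reconstruct`) and the choice of plain dense SVD are our own simplifications.

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: move axis `mode` to the front and flatten the rest,
    # giving the matrix whose column space defines the mode-n factor.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def truncated_hosvd(T, ranks):
    # Truncated HOSVD: for each mode, take the leading left singular
    # vectors of the mode-n unfolding as the Tucker factor matrix.
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    # Core tensor: contract T with each factor transpose along its mode.
    core = T
    for mode, U in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def tucker_reconstruct(core, factors):
    # Multiply the core by each factor along its mode to rebuild the tensor.
    T = core
    for mode, U in enumerate(factors):
        T = np.moveaxis(
            np.tensordot(U, np.moveaxis(T, mode, 0), axes=1), 0, mode)
    return T
```

If the input tensor has exact multilinear rank equal to `ranks`, the reconstruction is exact (up to floating-point error); otherwise HOSVD gives a quasi-optimal approximation. The scalability problem HOQRI targets is visible here: `unfold` materializes dense matricizations, which is exactly the intermediate memory blow-up that sparse-aware algorithms avoid.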