Tensor-based methods and dimensionality reduction are advancing rapidly, driven by the need to process and analyze high-dimensional data efficiently. Researchers are working to improve both the performance and the interpretability of tensor decompositions, for example through adaptive regularization mechanisms and example-driven treatments of tensor rank. There is also growing interest in robust, computationally efficient frameworks for non-negative matrix and tensor factorization, and in dataset-adaptive dimensionality reduction techniques. These advances stand to improve the accuracy and efficiency of applications such as hyperspectral image classification, data analysis, and visualization. Noteworthy papers in this area include:
- SDTN and TRN, a self-adaptive tensor-regularized network for hyperspectral image classification that improves accuracy while using fewer model parameters.
- The Target Polish, a robust and computationally efficient framework for non-negative matrix and tensor factorization that demonstrates resistance to outliers with reduced computation time (a baseline NMF sketch follows this list).
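
For readers unfamiliar with the factorization problem these papers build on, the sketch below shows plain non-negative matrix factorization with the classic multiplicative updates under a Frobenius loss. It is a minimal baseline illustration only, not the Target Polish method or any other approach from the cited papers; the function name, matrix sizes, and iteration count are arbitrary choices for the example.

```python
import numpy as np

def nmf_multiplicative(X, rank, n_iter=200, eps=1e-9, seed=0):
    """Baseline NMF: factor a non-negative matrix X as X ~= W @ H using
    the classic multiplicative updates for the Frobenius loss.
    Illustrative sketch only, not the Target Polish framework."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(n_iter):
        # Multiplicative updates keep W and H non-negative by construction;
        # eps guards against division by zero.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Usage: factor a small non-negative matrix and report relative error.
X = np.abs(np.random.default_rng(1).random((30, 20)))
W, H = nmf_multiplicative(X, rank=5)
print(np.linalg.norm(X - W @ H) / np.linalg.norm(X))
```

Robust variants such as the one described above typically replace or reweight this least-squares objective so that a few grossly corrupted entries do not dominate the fit.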