Advancements in Tensor Decompositions and Multi-Task Learning

The field of numerical simulation and data analysis is moving toward more efficient and accurate methods for dimensionality reduction and multi-task learning. Recent work has focused on building quantities of interest and invariants tied to conservation principles directly into low-dimensional models, so that simulation data can be analyzed accurately without access to the full high-dimensional dataset. There is also growing interest in tensor decompositions and low-rank tensor representations for processing multi-dimensional data, particularly in scenarios with significant spatial variation. Noteworthy papers in this area include AutoScale, which introduces a simple yet effective framework for linear scalarization guided by multi-task optimization metrics, and Superpixel-informed Continuous Low-Rank Tensor Representation, which proposes a framework for continuous, flexible modeling of multi-dimensional data beyond traditional grid-based constraints.
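To make the low-rank tensor idea concrete, the sketch below implements a truncated higher-order SVD (HOSVD), one standard way to compute a low-rank Tucker-style decomposition; it is a generic illustration of the technique, not the specific algorithm of any paper listed here, and the function names and rank choices are our own.

```python
import numpy as np

def unfold(X, mode):
    """Mode-k matricization: rows indexed by the given mode."""
    return np.reshape(np.moveaxis(X, mode, 0), (X.shape[mode], -1))

def mode_product(X, U, mode):
    """Multiply tensor X by matrix U along the given mode."""
    return np.moveaxis(np.tensordot(U, np.moveaxis(X, mode, 0), axes=1), 0, mode)

def hosvd(X, ranks):
    """Truncated HOSVD: returns core G and factors U_k with
    X ~ G x_1 U_1 x_2 U_2 ... (exact if the multilinear rank of X
    is at most `ranks`)."""
    factors = []
    for mode, r in enumerate(ranks):
        # Leading left singular vectors of each unfolding span the
        # dominant mode-k subspace.
        U, _, _ = np.linalg.svd(unfold(X, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = X
    for mode, U in enumerate(factors):
        core = mode_product(core, U.T, mode)
    return core, factors

def reconstruct(core, factors):
    """Multilinear product of the core with all factor matrices."""
    X = core
    for mode, U in enumerate(factors):
        X = mode_product(X, U, mode)
    return X
```

For a tensor of true multilinear rank (2, 3, 2), `hosvd` with matching ranks recovers it to numerical precision while storing only the small core and three thin factor matrices, which is the storage saving that makes such decompositions attractive for large simulation datasets.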

Sources

Goal-Oriented Low-Rank Tensor Decompositions for Numerical Simulation Data

Superpixel-informed Continuous Low-Rank Tensor Representation for Multi-Dimensional Data Recovery

AutoScale: Linear Scalarization Guided by Multi-Task Optimization Metrics

Topology-Aware Volume Fusion for Spectral Computed Tomography via Histograms and Extremum Graph

Tensorized Multi-Task Learning for Personalized Modeling of Heterogeneous Individuals with High-Dimensional Data
