Time Series Forecasting Developments

The field of time series forecasting is moving toward more efficient and effective solutions, with a focus on leveraging cross-modal dependencies, incorporating domain knowledge, and developing novel architectures. Recent work shows that combining modalities, such as visual and time series data, can improve forecasting performance. Transformers and other deep learning models remain popular, with innovations such as attention modulation and hybrid temporal and multivariate embeddings. Noteworthy papers include VIFO, which proposes a cross-modal forecasting model that renders multivariate time series as images; PhaseFormer, which models periodicity from a phase perspective and achieves state-of-the-art performance with a lightweight routing mechanism; TimeFormer, which develops a Transformer architecture designed for time series data; and HTMformer, which combines hybrid temporal and multivariate embeddings with a Transformer backbone to build a lightweight forecaster.
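The cross-modal idea behind VIFO, rendering a multivariate time series into an image so that a vision-capable model can consume it, can be sketched very roughly as a rasterization step. The function below is an illustrative assumption (names, grid size, and the binary column-plot scheme are all invented here), not VIFO's actual pipeline:

```python
# Illustrative sketch: rasterize a multivariate time series into a 2D
# grayscale grid, loosely in the spirit of VIFO's series-to-image idea.
# The rasterization scheme is an assumption, not VIFO's implementation.

def series_to_image(series, height=16):
    """Render each variable (one list of floats per variable) into a
    (height x T) grid, stacking the variables' grids vertically.

    Each time step gets exactly one cell set to 1.0, at a vertical
    position proportional to the min-max-scaled value.
    """
    image = []
    for var in series:
        lo, hi = min(var), max(var)
        span = (hi - lo) or 1.0  # avoid division by zero for flat series
        grid = [[0.0] * len(var) for _ in range(height)]
        for t, x in enumerate(var):
            row = int((x - lo) / span * (height - 1))
            grid[height - 1 - row][t] = 1.0  # larger values sit higher
        image.extend(grid)
    return image

# Two toy variables of length 8 -> a (2 * 16) x 8 grid
img = series_to_image([[0, 1, 2, 3, 4, 5, 6, 7], [3, 1, 4, 1, 5, 9, 2, 6]])
```

The resulting grid could then be fed to any image encoder; the point of the sketch is only that the series-to-image step itself is cheap and model-agnostic.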
Sources
Lightweight and Data-Efficient Multivariate Time Series Forecasting using Residual-Stacked Gaussian (RS-GLinear) Architecture
Benchmarking M-LTSF: Frequency and Noise-Based Evaluation of Multivariate Long Time Series Forecasting Models
ATLO-ML: Adaptive Time-Length Optimizer for Machine Learning -- Insights from Air Quality Forecasting