Advances in Time Series Forecasting

The field of time series forecasting is moving toward pre-trained language models and multimodal integration to improve forecasting accuracy. Researchers are examining how effectively knowledge transfers from language models to time series forecasting, and under what conditions multimodal input actually yields gains. There is also growing interest in frameworks that adapt to general covariate-aware forecasting tasks, and in sparse autoregression frameworks for periodicity quantification. Noteworthy papers include 'Random Initialization Can't Catch Up', which analyzes the advantage of language model transfer for time series forecasting, and 'Teaching Time Series to See and Speak', which proposes a multimodal contrastive learning framework for forecasting with aligned visual and textual perspectives. In addition, 'ST-MTM: Masked Time Series Modeling with Seasonal-Trend Decomposition' presents a masking method tailored to seasonal and trend components, and 'Variational Digital Twins' introduces a framework that augments standard neural architectures with a single Bayesian output layer, providing real-time insights into complex energy assets.
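As a minimal illustration of the seasonal-trend split that masking methods like ST-MTM operate on, the sketch below performs a classical moving-average decomposition. The function name, the centered-average trend estimate, and the toy series are illustrative assumptions for this digest, not the paper's actual method.

```python
import numpy as np

def seasonal_trend_decompose(x, period):
    """Split a series into trend, seasonal, and residual components
    using a simple centered moving average (classical decomposition).
    Illustrative sketch only, not the ST-MTM procedure."""
    x = np.asarray(x, dtype=float)
    # Trend: centered moving average over one full period.
    kernel = np.ones(period) / period
    trend = np.convolve(x, kernel, mode="same")
    # Seasonal: average the detrended values at each phase of the cycle.
    detrended = x - trend
    phase_means = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.tile(phase_means, len(x) // period + 1)[: len(x)]
    # Residual: whatever trend and seasonality do not explain.
    residual = x - trend - seasonal
    return trend, seasonal, residual

# Toy example: linear trend plus a sinusoid with period 12.
t = np.arange(120)
x = 0.05 * t + np.sin(2 * np.pi * t / 12)
trend, seasonal, residual = seasonal_trend_decompose(x, period=12)
```

Masking the seasonal and trend components separately, rather than masking raw values, is what lets a model learn component-specific structure during pre-training.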

Sources

Random Initialization Can't Catch Up: The Advantage of Language Model Transfer for Time Series Forecasting

Does Multimodality Lead to Better Time Series Forecasting?

UniCA: Adapting Time Series Foundation Model to General Covariate-Aware Forecasting

Interpretable Time Series Autoregression for Periodicity Quantification

Accurate Parameter-Efficient Test-Time Adaptation for Time Series Forecasting

Teaching Time Series to See and Speak: Forecasting with Aligned Visual and Textual Perspectives

ST-MTM: Masked Time Series Modeling with Seasonal-Trend Decomposition for Time Series Forecasting

Variational Digital Twins
