Advances in Time Series Forecasting with Large Language Models

The field of time series forecasting is seeing significant developments from the integration of large language models (LLMs). Researchers are exploring how LLMs can improve forecasting accuracy and robustness. One notable direction is the use of encoder-only transformers with non-causal, bidirectional attention, which have achieved state-of-the-art performance on certain forecasting tasks. There is also growing interest in multimodal time series forecasting, where LLMs are used to incorporate visual data, such as satellite imagery, into forecasting models. Another line of research focuses on developing more efficient and scalable time series foundation models that can be fine-tuned for specific downstream tasks. Noteworthy papers include Output Scaling: YingLong-Delayed Chain of Thought in a Large Pretrained Time Series Forecasting Model, which presents a joint forecasting framework for time series prediction, and Large Language Models for Time Series Analysis: Techniques, Applications, and Challenges, which provides a systematic review of pre-trained LLM-driven time series analysis.
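To make the encoder-only direction concrete, the sketch below is a minimal, hypothetical PyTorch forecaster in which the self-attention is left unmasked, so every time step attends to every other step (non-causal, bidirectional attention). The class name, parameters, and dimensions are illustrative assumptions and do not reproduce the architecture of any of the papers cited here.

```python
import torch
import torch.nn as nn

class EncoderOnlyForecaster(nn.Module):
    """Toy encoder-only forecaster with bidirectional (non-causal) self-attention."""

    def __init__(self, context_len=96, horizon=24, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.input_proj = nn.Linear(1, d_model)                  # scalar value -> model dimension
        self.pos_emb = nn.Parameter(torch.zeros(1, context_len, d_model))  # learned positions
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=4 * d_model,
            batch_first=True, norm_first=True,
        )
        # No attention mask is passed to the encoder, so every position attends
        # to every other position: bidirectional, non-causal attention.
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(context_len * d_model, horizon)    # flatten-and-project forecast head

    def forward(self, x):
        # x: (batch, context_len) univariate history
        h = self.input_proj(x.unsqueeze(-1)) + self.pos_emb      # (batch, context_len, d_model)
        h = self.encoder(h)                                       # full bidirectional attention
        return self.head(h.flatten(start_dim=1))                  # (batch, horizon) point forecasts


# Toy usage: forecast 24 future steps from a 96-step context window.
model = EncoderOnlyForecaster()
history = torch.randn(8, 96)
forecast = model(history)
print(forecast.shape)  # torch.Size([8, 24])
```

The key design choice is simply omitting the causal attention mask; a decoder-style model would instead pass a lower-triangular mask so each step can only attend to its past.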
Sources
Output Scaling: YingLong-Delayed Chain of Thought in a Large Pretrained Time Series Forecasting Model
HAELT: A Hybrid Attentive Ensemble Learning Transformer Framework for High-Frequency Stock Price Forecasting