The field of time series forecasting is advancing through the integration of multi-modal views, large vision models, and slow-thinking language models, with researchers exploring how these approaches can improve both forecasting accuracy and interpretability. Notably, binary cumulative encoding and retrieval-augmented time series foundation models have shown promising results. New attention mechanisms such as XicorAttention and calibration strategies such as Socket+Plug are further enhancing the performance of existing models, while studies of ensembling and zero-shot forecasting are clarifying the trade-offs between accuracy and computational cost.
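For context on one of these mechanisms: the name XicorAttention points to Chatterjee's Xi correlation coefficient (xicor), a rank-based measure of how well one variable can be expressed as a function of another. How the paper folds this measure into attention scoring is its own contribution and is not reproduced here; the sketch below only computes the coefficient itself in the no-ties case, and the function name and example data are illustrative.

```python
import numpy as np

def xicor(x, y):
    """Chatterjee's Xi correlation coefficient (no-ties case).

    Returns roughly 0 when y is independent of x and approaches 1
    when y is a (not necessarily monotonic) function of x.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    y_sorted = y[np.argsort(x)]                    # order the pairs by x
    ranks = np.argsort(np.argsort(y_sorted)) + 1   # 1-based ranks of y
    # xi_n = 1 - 3 * sum_i |r_{i+1} - r_i| / (n^2 - 1)
    return 1.0 - 3.0 * np.abs(np.diff(ranks)).sum() / (n**2 - 1)

rng = np.random.default_rng(0)
t = rng.uniform(-3, 3, 500)
print(xicor(t, np.sin(t)))               # near 1: functional dependence
print(xicor(t, rng.normal(size=500)))    # near 0: independence
```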
Noteworthy papers include:
- Multi-Modal View Enhanced Large Vision Models for Long-Term Time Series Forecasting, which proposes a decomposition-based framework that builds multi-modal views of the input series to enhance large vision models for long-term forecasting.
- Can Slow-thinking LLMs Reason Over Time?, which investigates whether slow-thinking language models can reason over time series and finds that they exhibit non-trivial zero-shot forecasting capabilities.
- Binary Cumulative Encoding meets Time Series Forecasting, which introduces binary cumulative encoding to represent scalar targets as monotonic binary vectors and achieves state-of-the-art results on several benchmark datasets; a minimal sketch of the encoding follows this list.
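As referenced above, binary cumulative (thermometer-style) encoding maps a scalar to a monotonic bit vector whose i-th bit indicates whether the value exceeds the i-th threshold. The sketch below illustrates an encode/decode round trip; the equally spaced thresholds, bit width, and midpoint decoding rule are illustrative assumptions rather than the paper's exact design.

```python
import numpy as np

def make_thresholds(y_train, num_bits=32):
    """Equally spaced thresholds over the training range (an
    illustrative binning choice, not necessarily the paper's)."""
    lo, hi = float(np.min(y_train)), float(np.max(y_train))
    return np.linspace(lo, hi, num_bits)

def bce_encode(y, thresholds):
    """Scalar -> monotonic binary vector: bit i is 1 iff y > t_i,
    e.g. y = 0.7 with thresholds [0.25, 0.5, 0.75] -> [1, 1, 0]."""
    y = np.asarray(y, dtype=float)
    return (y[..., None] > thresholds).astype(np.float32)

def bce_decode(bits, thresholds):
    """Count the 'on' bits (tolerant of per-bit probabilities or a
    few flipped bits) and map the count to a threshold midpoint."""
    k = (np.asarray(bits) > 0.5).sum(axis=-1)
    t = np.asarray(thresholds, dtype=float)
    padded = np.concatenate(([t[0]], t, [t[-1]]))  # clamp both ends
    return (padded[k] + padded[k + 1]) / 2.0

y = np.array([0.12, 0.48, 0.93])
thr = make_thresholds(y, num_bits=8)
codes = bce_encode(y, thr)      # shape (3, 8), rows like 1...10...0
print(codes)
print(bce_decode(codes, thr))   # approximate reconstruction of y
```

One appeal of this representation is that a per-bit classification loss degrades gracefully: flipping a single bit shifts the decoded value by one bin rather than producing an arbitrary regression error.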