Explainable AI in Time Series Forecasting

The field of time series forecasting is moving toward greater transparency and interpretability, with growing emphasis on explaining the reasoning behind model predictions. This shift is driven by the need to understand and trust the outputs of complex models, particularly in high-stakes applications. Recent work has introduced methods for making deep learning forecasters more interpretable, including post-hoc explainability techniques and model-agnostic algorithms. These advances could broaden the adoption of machine learning-based approaches in risk assessment and disaster planning. Notable papers in this area include:

  • A paper that presents an approach to making a deep learning-based solar storm prediction model interpretable, leveraging post-hoc, model-agnostic techniques to elucidate the factors contributing to the predicted output.
  • A paper that proposes a model-agnostic, post-hoc algorithm that explains time series forecasting models and their forecasts via localized perturbations, yielding multi-granular explanations and characterizing cross-channel correlations in multivariate forecasts (see the sketch after this list).
  • A paper that develops a transparent machine learning architecture using the HazBinLoss function, addressing data imbalances and providing the exact contribution of each input term.
  • A paper that combines traditional explainable AI methods with Rating Driven Explanations to assess the performance and interpretability of time-series forecasting models across diverse domains and use cases.
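To make the perturbation-based idea concrete, the sketch below illustrates one generic way a model-agnostic, post-hoc explainer can attribute a multivariate forecast to localized (time-window, channel) regions of the input: perturb one region at a time and measure how much the forecast shifts. This is a minimal illustration of the general technique, not the PAX-TS algorithm itself; the `forecast` stand-in model, the `perturbation_importance` helper, and all parameter choices are hypothetical assumptions for the example.

```python
import numpy as np

def forecast(history: np.ndarray) -> np.ndarray:
    """Stand-in black-box forecaster: predicts the next 3 steps of each
    channel as the mean of the last 5 observations. Any model-agnostic
    explainer only needs this input -> forecast interface."""
    return np.repeat(history[-5:].mean(axis=0, keepdims=True), 3, axis=0)

def perturbation_importance(history: np.ndarray, window: int = 4,
                            n_samples: int = 20, seed: int = 0) -> np.ndarray:
    """Score each (time-window, channel) cell by how much randomly
    perturbing it changes the forecast, averaged over noise draws."""
    rng = np.random.default_rng(seed)
    base = forecast(history)
    n_steps, n_channels = history.shape
    n_windows = n_steps // window
    scores = np.zeros((n_windows, n_channels))
    for w in range(n_windows):
        sl = slice(w * window, (w + 1) * window)
        for c in range(n_channels):
            deltas = []
            for _ in range(n_samples):
                perturbed = history.copy()
                # Perturb only this local window of this channel with
                # Gaussian noise scaled to the channel's variability.
                perturbed[sl, c] += rng.normal(scale=history[:, c].std(),
                                               size=window)
                deltas.append(np.abs(forecast(perturbed) - base).mean())
            scores[w, c] = np.mean(deltas)
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    series = rng.normal(size=(48, 3)).cumsum(axis=0)  # 48 steps, 3 channels
    print(perturbation_importance(series))  # (windows x channels) importance map
```

The resulting importance map is coarse by construction; varying the window size is one simple way to obtain explanations at multiple granularities, and comparing scores across channels gives a rough view of which channels drive a multivariate forecast.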

Sources

Explainable AI in Deep Learning-Based Prediction of Solar Storms

PAX-TS: Model-agnostic multi-granular explanations for time series forecasting via localized perturbations

Breaking the Black Box: Inherently Interpretable Physics-Informed Machine Learning for Imbalanced Seismic Data

On Identifying Why and When Foundation Models Perform Well on Time-Series Forecasting Using Automated Explanations and Rating
