The field of time series forecasting is moving towards greater transparency and interpretability, with a focus on explaining the reasoning behind model predictions. This shift is driven by the need to understand and trust the outputs of complex models, particularly in high-stakes applications. Recent work has introduced methods for making deep learning forecasters more interpretable, including post-hoc explainability techniques and model-agnostic algorithms. These advances could broaden the adoption of machine learning approaches in risk assessment studies and disaster planning. Notable papers in this area include:
- A paper that presents an approach to making a deep learning-based solar storm prediction model interpretable, using post-hoc model-agnostic techniques to identify the input factors that contribute to each predicted output.
- A paper that proposes a model-agnostic post-hoc algorithm to explain time series forecasting models and their forecasts, producing multi-granular explanations and characterizing cross-channel correlations in multivariate forecasts (a generic sketch of this style of explanation follows the list).
- A paper that develops a transparent machine learning architecture using the HazBinLoss function, addressing data imbalances and providing the exact contribution of each input term.
- A paper that combines traditional explainable AI methods with Rating Driven Explanations to assess time series forecasting model performance and interpretability across diverse domains and use cases.
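
To make the idea of a model-agnostic, post-hoc explanation concrete, below is a minimal sketch of a perturbation-based channel attribution for a multivariate forecaster. It is not the algorithm from any of the papers above: the `naive_forecaster`, the occlusion-by-mean perturbation, and the `channel_importance` scoring are illustrative assumptions, chosen only to show how a forecast can be explained without access to the model's internals.

```python
# Minimal sketch: perturbation-based, post-hoc, model-agnostic attribution for a
# multivariate time series forecaster. Illustrative only; not any paper's method.
import numpy as np


def naive_forecaster(history: np.ndarray, horizon: int = 12) -> np.ndarray:
    """Stand-in black-box model: forecasts the mean of the last 24 steps of
    channel 0 (assumed target channel) for every future step."""
    return np.full(horizon, history[-24:, 0].mean())


def channel_importance(model, history: np.ndarray, horizon: int = 12) -> np.ndarray:
    """Score each input channel by how much the forecast shifts when that
    channel is occluded (replaced with its historical mean)."""
    baseline = model(history, horizon)
    scores = np.zeros(history.shape[1])
    for c in range(history.shape[1]):
        perturbed = history.copy()
        perturbed[:, c] = history[:, c].mean()  # occlude one channel
        scores[c] = np.abs(model(perturbed, horizon) - baseline).mean()
    return scores


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic multivariate history: 200 time steps, 3 channels.
    history = rng.normal(size=(200, 3)).cumsum(axis=0)
    print(channel_importance(naive_forecaster, history))
```

Because the explanation only queries the model through its forecast outputs, the same scoring loop applies to any forecaster; finer-grained variants of this idea perturb individual time windows rather than whole channels to obtain multi-granular attributions.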