The field of complex systems modeling is shifting towards interpretable and explainable methods. Recent research has highlighted the importance of understanding the underlying mechanisms and relationships between variables in complex systems, rather than relying solely on black-box models. This is being achieved through new frameworks and architectures that provide insight into how a model arrives at its predictions.
Notable papers in this area include:

- TrendGNN, which proposes a graph-based forecasting framework for interpretable analysis of epidemic signals and behaviors.
- A Self-explainable Model of Long Time Series, which introduces the EXCAP framework for extracting informative structured causal patterns from long time series data.
- TimePred, which presents an efficient and interpretable offline change point detection framework for high-volume data (a generic sketch of the offline change point detection setting follows this list).
- When, How Long and How Much, which proposes the MAGNETS architecture for inherently interpretable neural networks in time series regression.
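To make the offline change point detection setting concrete, below is a minimal, self-contained sketch using classic binary segmentation with a squared-error cost. This is not the TimePred method described above; it is only a generic illustration of the problem these frameworks address, and all function names, the cost model, and the penalty value are assumptions made for the example.

```python
# Generic offline change point detection via binary segmentation.
# NOT the TimePred algorithm -- an illustrative sketch only; the cost model
# (squared deviation from the segment mean) and all names are assumptions.
import numpy as np


def segment_cost(x: np.ndarray) -> float:
    """Cost of modelling a segment by its mean (sum of squared residuals)."""
    return float(np.sum((x - x.mean()) ** 2)) if len(x) else 0.0


def best_split(x: np.ndarray, min_size: int = 5):
    """Return (index, gain) of the single split that most reduces the cost."""
    total = segment_cost(x)
    best_idx, best_gain = None, 0.0
    for i in range(min_size, len(x) - min_size):
        gain = total - (segment_cost(x[:i]) + segment_cost(x[i:]))
        if gain > best_gain:
            best_idx, best_gain = i, gain
    return best_idx, best_gain


def binary_segmentation(x: np.ndarray, penalty: float, offset: int = 0):
    """Recursively split the series while the cost reduction exceeds `penalty`."""
    idx, gain = best_split(x)
    if idx is None or gain <= penalty:
        return []
    return (binary_segmentation(x[:idx], penalty, offset)
            + [offset + idx]
            + binary_segmentation(x[idx:], penalty, offset + idx))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Piecewise-constant signal with true change points at t=100 and t=250.
    signal = np.concatenate([rng.normal(0, 1, 100),
                             rng.normal(3, 1, 150),
                             rng.normal(-1, 1, 100)])
    # Detected indices should fall near the true change points.
    print(binary_segmentation(signal, penalty=25.0))
```

The detected split indices themselves serve as the interpretable output: each one marks where the segment-mean model of the series changes, which is the kind of human-readable structure the papers above aim to recover at scale.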