Interpretable Modeling of Complex Systems

The field of complex systems modeling is shifting toward interpretable and explainable methods. Recent research emphasizes understanding the underlying mechanisms and the relationships between variables in complex systems, rather than relying on black-box models alone. This shift is driven by new frameworks and architectures that expose how a model arrives at its predictions.

Notable papers in this area include TrendGNN, a graph-based forecasting framework for interpretable analysis of epidemic signals, beliefs, and behaviors; EXCAP, a self-explainable model that extracts informative structured causal patterns from long time series; TimePred, an efficient and interpretable offline change point detection framework for high-volume data; and MAGNETS, an inherently interpretable neural architecture for time series regression that learns to mask and aggregate. Illustrative sketches of the change point detection and mask-and-aggregate ideas follow.
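To make the change point detection concept concrete, here is a minimal, self-contained sketch. This is not TimePred's algorithm (the paper specifies its own efficient method); it simply finds the single best change point in the mean of a series by minimizing within-segment squared error. The function name and data are hypothetical. Efficient offline detectors avoid the quadratic scan below by reusing cumulative sums.

```python
import numpy as np

def best_change_point(signal: np.ndarray) -> int:
    """Return the split index that minimizes total within-segment
    squared error, i.e. the best single change point in the mean."""
    n = len(signal)
    best_tau, best_cost = 1, np.inf
    for tau in range(1, n):  # every candidate change point
        left, right = signal[:tau], signal[tau:]
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_tau, best_cost = tau, cost
    return best_tau

# Synthetic series with a mean shift at index 200.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(3.0, 1.0, 300)])
print(best_change_point(x))  # close to 200, the true shift location
```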
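The "when, how long, and how much" framing of MAGNETS also lends itself to a compact illustration. The PyTorch module below is a sketch of the general mask-and-aggregate idea, not the MAGNETS architecture itself: each unit learns a Gaussian temporal mask whose center answers "when" and whose width answers "how long", plus a read-out weight that answers "how much", so a prediction can be traced back to inspectable masks. All names (MaskAggregateRegressor and its parameters) are hypothetical.

```python
import torch
import torch.nn as nn

class MaskAggregateRegressor(nn.Module):
    """Sketch of mask-and-aggregate regression: each unit averages the
    input under a learned Gaussian temporal mask, then a linear read-out
    combines the masked averages into a scalar prediction."""

    def __init__(self, seq_len: int, n_units: int = 4):
        super().__init__()
        self.register_buffer("t", torch.arange(seq_len, dtype=torch.float32))
        self.centre = nn.Parameter(torch.rand(n_units) * seq_len)   # "when"
        self.log_width = nn.Parameter(torch.zeros(n_units))         # "how long"
        self.weight = nn.Parameter(torch.randn(n_units) * 0.1)      # "how much"
        self.bias = nn.Parameter(torch.zeros(1))

    def masks(self) -> torch.Tensor:
        # One Gaussian bump per unit: shape (n_units, seq_len), values in (0, 1].
        width = self.log_width.exp().unsqueeze(1)
        return torch.exp(-((self.t - self.centre.unsqueeze(1)) / width) ** 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len). Masked average per unit, then linear read-out.
        m = self.masks()                                    # (units, seq_len)
        feats = (x.unsqueeze(1) * m).sum(-1) / m.sum(-1)    # (batch, units)
        return feats @ self.weight + self.bias              # (batch,)

model = MaskAggregateRegressor(seq_len=100)
y_hat = model(torch.randn(8, 100))  # (8,) predictions
print(model.masks().shape)          # inspect the learned "when"/"how long" masks
```

After training, plotting each row of `masks()` shows which time windows a unit attends to, which is the sense in which such architectures are inherently interpretable.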

Sources

TrendGNN: Towards Understanding of Epidemics, Beliefs, and Behaviors

A Self-explainable Model of Long Time Series by Extracting Informative Structured Causal Patterns

TimePred: efficient and interpretable offline change point detection for high volume data - with application to industrial process monitoring

When, How Long and How Much? Interpretable Neural Networks for Time Series Regression by Learning to Mask and Aggregate

Artificial Intelligence Applications in Horizon Scanning for Infectious Diseases
