Advances in Time Series Analysis and Causal Discovery
The field of time series analysis and causal discovery is evolving rapidly, with a focus on methods that capture complex nonlinear dependencies while guarding against spurious correlations. Recent work explores Transformer-based architectures, such as multi-layer time-series forecasters and attention-inspired gated Mixture-of-Experts, to improve forecasting accuracy and efficiency. There is also growing interest in integrating prior knowledge and causal discovery into beam management pipelines to improve reliability and interpretability. Noteworthy papers include Transforming Causality, which introduces a framework for temporal causal discovery and inference with prior knowledge integration, and Causal Beam Selection, which proposes a causally aware deep learning framework for beam management. Other works, such as GateTS and FreezeTST, demonstrate the potential of simplified training and parameter-efficient designs for univariate time series forecasting. Open-source libraries such as the WHAR datasets and pyFAST are also making research in this area more efficient and reproducible. Together, these advances point toward more accurate, reliable, and interpretable methods for time series analysis and causal discovery.
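To make the gated Mixture-of-Experts idea concrete: a gate network scores each expert per input and blends their forecasts with softmax weights. The toy sketch below is purely illustrative (the class name, linear experts, and single-step setup are assumptions, not the architecture of GateTS or any other cited paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class GatedMoEForecaster:
    """Toy gated Mixture-of-Experts: each expert is a linear map from a
    lookback window to a one-step forecast; a softmax gate blends them.
    Illustrative only -- not the design of any paper listed below."""

    def __init__(self, window, n_experts):
        self.W_exp = rng.normal(0, 0.1, size=(n_experts, window))   # expert weights
        self.W_gate = rng.normal(0, 0.1, size=(window, n_experts))  # gating weights

    def forward(self, x):
        # x: (batch, window) of past values
        expert_preds = x @ self.W_exp.T           # (batch, n_experts)
        gate = softmax(x @ self.W_gate)           # rows sum to 1
        return (gate * expert_preds).sum(axis=1)  # (batch,) one-step forecasts

model = GatedMoEForecaster(window=16, n_experts=4)
x = rng.normal(size=(8, 16))
yhat = model.forward(x)
```

The gate lets different experts specialize on different regimes of the series while keeping a single end-to-end differentiable model.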
Sources
Transforming Causality: Transformer-Based Temporal Causal Discovery with Prior Knowledge Integration
Enhancing Transformer-Based Foundation Models for Time Series Forecasting via Bagging, Boosting and Statistical Ensembles
Frozen in Time: Parameter-Efficient Time Series Transformers via Reservoir-Induced Feature Expansion and Fixed Random Dynamics
HierCVAE: Hierarchical Attention-Driven Conditional Variational Autoencoders for Multi-Scale Temporal Modeling
Compositionality in Time Series: A Proof of Concept using Symbolic Dynamics and Compositional Data Augmentation
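The "Frozen in Time" entry above points at a general parameter-efficient recipe: expand the input window through a fixed random (reservoir-like) nonlinear feature map and train only a lightweight linear readout. A minimal sketch of that general idea, using closed-form ridge regression as the readout (the synthetic series, feature sizes, and regularization are assumptions for illustration, not details from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def frozen_features(X, W_frozen):
    """Fixed random nonlinear expansion; W_frozen is never trained."""
    return np.tanh(X @ W_frozen)

# Toy univariate series: predict x[t] from the previous `window` values.
window, n_feat, lam = 12, 64, 1e-2
series = np.sin(np.arange(500) * 0.1) + 0.05 * rng.normal(size=500)
X = np.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

# Frozen random projection (the "reservoir-induced" expansion).
W_frozen = rng.normal(0, 1.0 / np.sqrt(window), size=(window, n_feat))
H = frozen_features(X, W_frozen)

# Only the linear readout is fit, here in closed form via ridge regression.
w = np.linalg.solve(H.T @ H + lam * np.eye(n_feat), H.T @ y)
pred = H @ w
mse = np.mean((pred - y) ** 2)
```

Because the expansion is frozen, the trainable parameter count is just the readout vector, which is what makes such designs attractive when data or compute is limited.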