The field of graph neural networks (GNNs) is advancing rapidly, with a focus on improving model explainability, calibration, and scalability. Recent work introduces novel architectures and techniques to address noise and incompleteness in graph data, as well as the need for better-calibrated, more reliable predictions. Notable directions include co-augmentation of topology and attributes, distributed Shapley values for scalable explainability, and wavelet-aware temperature scaling for post-hoc calibration. These advances target applications such as node classification, graph anomaly detection, and time series analysis on graphs.
Noteworthy papers in this area include:
- GKNet: a graph-aware state space model for graph time series that achieves state-of-the-art results on prediction and imputation tasks.
- CoATA: a dual-channel GNN framework for co-augmentation of topology and attributes that outperforms existing methods on several benchmark datasets.
- DistShap: a parallel algorithm for scalable GNN explanations that achieves high accuracy and efficiency on large-scale graphs.
- Reconciling Attribute and Structural Anomalies for Improved Graph Anomaly Detection: a mutual distillation-based triple-channel graph anomaly detection framework, demonstrated to be effective on a range of datasets.
- Calibrating Graph Neural Networks with Wavelet-Aware Temperature Scaling: a post-hoc calibration framework that achieves state-of-the-art calibration performance on several benchmark datasets.
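To make the Shapley-value idea behind explainers like DistShap concrete, the following is a minimal sketch of the standard Monte Carlo permutation estimator of Shapley values for a generic set-valued scoring function. It is not DistShap's distributed algorithm (which parallelizes this kind of computation for GNN explanations); the `value_fn` and the toy additive game are illustrative assumptions.

```python
import numpy as np

def shapley_mc(value_fn, n_players, n_samples=500, rng=None):
    # Monte Carlo Shapley estimate: average each player's marginal
    # contribution over random orderings of the players.
    rng = rng or np.random.default_rng(0)
    phi = np.zeros(n_players)
    for _ in range(n_samples):
        perm = rng.permutation(n_players)
        coalition = np.zeros(n_players, dtype=bool)
        prev = value_fn(coalition)
        for p in perm:
            coalition[p] = True
            cur = value_fn(coalition)
            phi[p] += cur - prev  # marginal contribution of player p
            prev = cur
    return phi / n_samples

# Toy additive game: the value of a coalition is the sum of its
# members' weights, so the exact Shapley values equal the weights.
w = np.array([3.0, 1.0, 0.0, 2.0])
value = lambda mask: float(w[mask].sum())
phi = shapley_mc(value, n_players=4)
```

In a GNN-explanation setting, the "players" would be edges or node features and `value_fn` would re-run the model on the masked graph, which is exactly the expensive step that motivates parallel and distributed estimators.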
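For the calibration line of work, the following is a minimal sketch of classical post-hoc temperature scaling, which the wavelet-aware method generalizes: a single scalar T is fitted on validation logits to minimize negative log-likelihood, softening overconfident predictions. The grid search, synthetic logits, and function names here are illustrative assumptions, not the paper's method (which adapts temperatures using graph wavelets rather than one global scalar).

```python
import numpy as np

def nll(logits, labels, T):
    # Negative log-likelihood of labels under softmax(logits / T).
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()

def fit_temperature(val_logits, val_labels, grid=np.linspace(0.5, 5.0, 91)):
    # Post-hoc calibration: choose the scalar T minimizing validation NLL.
    losses = [nll(val_logits, val_labels, T) for T in grid]
    return float(grid[int(np.argmin(losses))])

# Toy example: logits with a class signal, scaled up to simulate
# the overconfidence typical of trained classifiers.
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=200)
logits = rng.normal(size=(200, 3))
logits[np.arange(200), labels] += 1.0
logits *= 4.0
T = fit_temperature(logits, labels)
```

Because temperature scaling only rescales logits, it changes confidence estimates without changing the predicted class, which is why it is a popular post-hoc choice for GNN node classifiers as well.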