Advances in Graph Neural Networks and Explainability

The field of graph neural networks (GNNs) is evolving rapidly, with a focus on improving node classification, explainability, and scalability. Recent work introduces frameworks that integrate large language models (LLMs) with GNNs to generate semantically rich explanations and to improve classification performance. There is also growing interest in explaining GNN predictions, both through post-hoc explanation methods and through self-explaining GNN architectures. These advances can strengthen the trustworthiness and interpretability of GNNs in high-stakes applications.

Noteworthy papers include An Effective Approach for Node Classification in Textual Graphs, which integrates TAPE with Graphormer and reports state-of-the-art performance on the ogbn-arxiv dataset; X-Node, which introduces a self-explaining GNN framework that generates faithful, per-node explanations; and From Nodes to Narratives: Explaining Graph Neural Networks with LLMs and Graph Context, which proposes a lightweight, post-hoc framework that uses LLMs to produce faithful and interpretable explanations of GNN predictions.
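
To make the post-hoc, LLM-based explanation idea concrete, the sketch below shows one minimal way such a pipeline could be wired up: a small GCN classifies nodes, and a helper gathers a node's k-hop neighborhood into a textual prompt that an LLM could turn into an explanation. This is an illustrative sketch only, assuming PyTorch and PyTorch Geometric; the build_explanation_prompt helper and the toy graph are hypothetical and do not reproduce the method of any paper listed below.

```python
# Minimal sketch: a GCN node classifier plus a post-hoc step that turns a
# node's local graph context into a textual prompt for an LLM explainer.
# Assumption: PyTorch and PyTorch Geometric are installed; the prompt-building
# helper is a hypothetical placeholder, not an API from any of the cited papers.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv
from torch_geometric.utils import k_hop_subgraph


class GCN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)


def build_explanation_prompt(node_id, data, pred_label, num_hops=2):
    """Collect the node's k-hop neighborhood and format it as an LLM prompt."""
    subset, sub_edge_index, _, _ = k_hop_subgraph(
        node_id, num_hops, data.edge_index, relabel_nodes=True
    )
    neighbors = subset.tolist()
    return (
        f"Node {node_id} was classified as class {pred_label}. "
        f"Its {num_hops}-hop neighborhood contains nodes {neighbors} "
        f"connected by {sub_edge_index.size(1)} edges. "
        "Explain, in plain language, which neighbors most plausibly "
        "support this prediction."
    )


if __name__ == "__main__":
    # Toy graph: 5 nodes, 4-dimensional features, 3 classes.
    x = torch.randn(5, 4)
    edge_index = torch.tensor([[0, 1, 1, 2, 3], [1, 0, 2, 1, 4]])
    data = Data(x=x, edge_index=edge_index)

    model = GCN(in_dim=4, hidden_dim=8, num_classes=3)
    logits = model(data.x, data.edge_index)
    pred = logits.argmax(dim=-1)

    prompt = build_explanation_prompt(0, data, pred[0].item())
    print(prompt)  # In a full pipeline, this prompt would be sent to an LLM.
```

In practice, the faithfulness of any explanation the LLM drafts from such a prompt would still need to be checked against the GNN's actual computation, which is the core concern the explainability papers above address.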

Sources

An Effective Approach for Node Classification in Textual Graphs

Semi-Supervised Supply Chain Fraud Detection with Unsupervised Pre-Filtering

From Nodes to Narratives: Explaining Graph Neural Networks with LLMs and Graph Context

RNA-KG v2.0: An RNA-centered Knowledge Graph with Properties

When Is Prior Knowledge Helpful? Exploring the Evaluation and Selection of Unsupervised Pretext Tasks from a Neuro-Symbolic Perspective

Discrete Diffusion-Based Model-Level Explanation of Heterogeneous GNNs with Node Features

Eat your own KR: a KR-based approach to index Semantic Web Endpoints and Knowledge Graphs

Differentiated Information Mining: A Semi-supervised Learning Framework for GNNs

GRainsaCK: a Comprehensive Software Library for Benchmarking Explanations of Link Prediction Tasks on Knowledge Graphs

Can LLM-Generated Textual Explanations Enhance Model Classification Performance? An Empirical Study

X-Node: Self-Explanation is All We Need

Efficient Patent Searching Using Graph Transformers
