Advances in Graph Representation Learning and Domain Adaptation

The field of graph learning is moving toward more robust and generalizable models that can handle out-of-distribution data and adapt to new domains. Recent research focuses on attention mechanisms and positional and structural encodings that capture invariant relationships between graph structure and labels. There is also growing interest in using graph properties to train foundation models that generalize across domains. Noteworthy papers include:

  • Invariant Graph Transformer for Out-of-Distribution Generalization, which introduces a transformer designed to learn graph representations that remain predictive under distribution shift.
  • Nested Graph Pseudo-Label Refinement for Noisy Label Domain Adaptation Learning, which proposes a pseudo-label refinement framework for graph-level domain adaptation under label noise.
  • DP-DGAD: A Generalist Dynamic Graph Anomaly Detector with Dynamic Prototypes, which uses dynamic prototypes to track evolving anomaly patterns in dynamic graphs.
  • GraphProp: Training the Graph Foundation Models using Graph Properties, which supervises foundation-model training with structural graph properties to aid cross-domain generalization.
  • SPA++: Generalized Graph Spectral Alignment for Versatile Domain Adaptation, which aligns source and target domains through a generalized graph spectral framework.
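To make the shared architectural idea concrete, here is a minimal sketch of self-attention over graph nodes augmented with Laplacian-eigenvector positional encodings, one common form of the structural encodings mentioned above. All dimensions, weight matrices, and the toy graph are illustrative assumptions, not taken from any of the listed papers:

```python
import numpy as np

# Toy graph: a 4-node cycle, given as an adjacency matrix (assumed example).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

# Laplacian eigenvector positional encodings: eigenvectors of L = D - A,
# skipping the trivial constant eigenvector. The encoding width (2) is arbitrary.
D = np.diag(A.sum(axis=1))
L = D - A
_, eigvecs = np.linalg.eigh(L)      # eigenvalues ascending, first is ~0
pos_enc = eigvecs[:, 1:3]

# Random node features, concatenated with the structural encoding.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 5))
H = np.concatenate([X, pos_enc], axis=1)

# Single-head scaled dot-product self-attention over all node pairs.
d = H.shape[1]
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = H @ Wq, H @ Wk, H @ Wv
scores = Q @ K.T / np.sqrt(d)
attn = np.exp(scores - scores.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)
out = attn @ V                      # updated node representations
```

Because the positional encoding is derived from graph spectra rather than node identities, the same pipeline applies unchanged to graphs from different domains, which is one reason such encodings appear in work on generalization and domain adaptation.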

Sources

Invariant Graph Transformer for Out-of-Distribution Generalization

Nested Graph Pseudo-Label Refinement for Noisy Label Domain Adaptation Learning

DP-DGAD: A Generalist Dynamic Graph Anomaly Detector with Dynamic Prototypes

Graph Representation Learning with Massive Unlabeled Data for Rumor Detection

GraphProp: Training the Graph Foundation Models using Graph Properties

A Scalable Pretraining Framework for Link Prediction with Efficient Adaptation

SPA++: Generalized Graph Spectral Alignment for Versatile Domain Adaptation
