Advances in Graph Representation Learning and Topological Deep Learning

The field of graph representation learning and topological deep learning is advancing rapidly, with a focus on methods that capture complex relationships and higher-order structure in data. Recent work combines contrastive learning, graph neural networks, and topological techniques to improve the performance of graph-based models. Notably, incorporating biological perturbations and directed higher-order motifs has yielded significant gains on tasks such as patient hazard prediction and brain activity decoding, while principled topological models and spectral graph neural networks show promise for learning representations of graph data. Notable papers include Directed Semi-Simplicial Learning with Applications to Brain Activity Decoding, which introduces Semi-Simplicial Neural Networks to capture directed higher-order patterns in brain networks, and CellCLAT: Preserving Topology and Trimming Redundancy in Self-Supervised Cellular Contrastive Learning, which proposes a framework that preserves cellular topology during contrastive learning while mitigating informational redundancy.
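To make the contrastive-learning thread above concrete, here is a minimal sketch of an InfoNCE-style objective of the kind used by graph contrastive methods: two augmented views of the same set of node (or cell) embeddings are treated as positive pairs, and all cross-pairs as negatives. This is a generic illustration in NumPy, not the specific loss of CellCLAT or any other paper listed below; the function name, perturbation-based "augmentation", and temperature value are all illustrative assumptions.

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """InfoNCE loss between two views; rows are embeddings.

    z1[i] and z2[i] form a positive pair; every other (i, j)
    pair acts as a negative. Lower loss = views agree more.
    """
    # L2-normalize rows so dot products are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                        # (n, n) similarity matrix
    sim = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # positives on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
# Stand-in for graph augmentation: two lightly perturbed copies
# of the same embeddings (real methods use edge/feature dropout).
loss = info_nce(z + 0.01 * rng.normal(size=z.shape),
                z + 0.01 * rng.normal(size=z.shape))
print(loss)
```

Because the two views are nearly identical, the loss falls well below the random-guessing baseline of log(n); topology-aware variants differ mainly in how the views are generated, e.g. by trimming redundant cellular structure rather than perturbing features.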

Sources

Supervised Graph Contrastive Learning for Gene Regulatory Network

Directed Semi-Simplicial Learning with Applications to Brain Activity Decoding

CellCLAT: Preserving Topology and Trimming Redundancy in Self-Supervised Cellular Contrastive Learning

Directed Graph Grammars for Sequence-based Learning

Walking the Weight Manifold: a Topological Approach to Conditioning Inspired by Neuromodulation

Hyperbolic-PDE GNN: Spectral Graph Neural Networks in the Perspective of A System of Hyperbolic Partial Differential Equations

Bidirectional predictive coding

Subgraph Gaussian Embedding Contrast for Self-Supervised Graph Representation Learning

Understanding Mode Connectivity via Parameter Space Symmetry
