Graph representation learning and topological deep learning are advancing rapidly, with a focus on methods that capture complex relationships and structures in data. Recent research explores contrastive learning, graph neural networks, and topological techniques to improve the performance of graph-based models. Notably, incorporating biological perturbations and directed higher-order motifs has led to significant improvements on tasks such as patient hazard prediction and brain activity decoding, while principled topological models and spectral graph neural networks show promise for learning representations of graph data.

Notable papers include:
- Directed Semi-Simplicial Learning with Applications to Brain Activity Decoding, which introduces Semi-Simplicial Neural Networks to capture directed higher-order patterns in brain networks.
- CellCLAT: Preserving Topology and Trimming Redundancy in Self-Supervised Cellular Contrastive Learning, which proposes a framework that preserves cellular topology during contrastive learning while mitigating informational redundancy.
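To make the contrastive-learning theme concrete, the following is a minimal, generic sketch of graph contrastive learning: two augmented views of a graph are encoded by a small GNN and pulled together with an InfoNCE-style loss. All names here (TinyGCN, edge_dropout, nt_xent) and the dense-adjacency setup are illustrative assumptions for this sketch, not the architectures proposed in CellCLAT or the Semi-Simplicial Learning paper.

```python
# Generic graph contrastive learning sketch (not the papers' methods).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGCN(nn.Module):
    """Two-layer GCN-style encoder on a dense adjacency matrix."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, out_dim)

    def forward(self, x, adj):
        # Symmetric normalization: D^{-1/2} (A + I) D^{-1/2}
        a_hat = adj + torch.eye(adj.size(0))
        d_inv_sqrt = a_hat.sum(dim=1).clamp(min=1).pow(-0.5)
        a_norm = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
        h = F.relu(self.lin1(a_norm @ x))
        return self.lin2(a_norm @ h)

def edge_dropout(adj, p=0.2):
    """Randomly drop edges to create an augmented view (kept symmetric)."""
    mask = (torch.rand_like(adj) > p).float()
    mask = torch.triu(mask, diagonal=1)
    return adj * (mask + mask.T)

def nt_xent(z1, z2, tau=0.5):
    """InfoNCE loss: the same node across views is the positive pair."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / tau
    targets = torch.arange(z1.size(0))
    return F.cross_entropy(logits, targets)

# Toy usage: random symmetric graph with 10 nodes and 8 node features.
x = torch.randn(10, 8)
adj = (torch.rand(10, 10) > 0.7).float()
adj = torch.triu(adj, diagonal=1)
adj = adj + adj.T

encoder = TinyGCN(8, 16, 16)
z1 = encoder(x, edge_dropout(adj))  # view 1
z2 = encoder(x, edge_dropout(adj))  # view 2
loss = nt_xent(z1, z2)
loss.backward()
```

Methods like CellCLAT operate on richer cellular (higher-order) complexes rather than plain adjacency matrices, but the contrastive objective follows this same view-augment-and-align pattern.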