The field of graph neural networks (GNNs) is advancing rapidly, with a focus on improving expressivity, efficiency, and adaptability. Recent work introduces new architectures and techniques, such as leveraging high-order derivatives to enhance expressivity and using adaptive node feature selection to improve performance. There is also growing interest in applying GNNs to real-world problems, including network epidemics and graph alignment. Noteworthy papers in this area include On The Expressive Power of GNN Derivatives, which enhances GNN expressivity using high-order derivatives; Bootstrap Learning for Combinatorial Graph Alignment with Sequential GNNs, which proposes a chaining procedure for graph alignment and reports substantial improvements over existing methods; and Directional Sheaf Hypergraph Networks, which introduces a framework for learning on both directed and undirected hypergraphs and achieves relative accuracy gains of up to 20% on real-world datasets. Together, these advances demonstrate the potential of GNNs to tackle complex problems and their growing importance in machine learning.
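
As a rough illustration of the derivative-based idea mentioned above, the sketch below shows how one might compute first-order derivatives of a pooled GNN readout with respect to node input features using plain PyTorch autograd, and expose them as extra node-level features. This is a minimal sketch under stated assumptions: the layer, function names, and readout are hypothetical and are not taken from the cited paper; it only illustrates the general mechanism of differentiating a GNN output.

```python
# Minimal sketch (plain PyTorch): derivative features for a tiny GNN.
# All names here are hypothetical illustrations, not the cited paper's method.
import torch
import torch.nn as nn


class TinyGNNLayer(nn.Module):
    """One mean-aggregation message-passing layer: h' = ReLU(W [x || A_norm x])."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(2 * in_dim, out_dim)

    def forward(self, x, adj_norm):
        # adj_norm: row-normalized adjacency matrix (dense, for illustration only)
        agg = adj_norm @ x
        return torch.relu(self.lin(torch.cat([x, agg], dim=-1)))


def readout_with_derivative_features(x, adj_norm, layer):
    """Return a graph-level readout plus first-order derivative features.

    The derivative of the pooled output with respect to each node's input
    features measures how sensitive the prediction is to that node; higher
    orders would repeat autograd.grad with create_graph=True.
    """
    x = x.clone().requires_grad_(True)
    h = layer(x, adj_norm)
    pooled = h.sum(dim=0)  # simple sum readout, shape [out_dim]
    grads = torch.autograd.grad(pooled.sum(), x, create_graph=True)[0]
    return pooled, grads   # grads: [num_nodes, in_dim]


if __name__ == "__main__":
    num_nodes, in_dim, out_dim = 5, 3, 4
    x = torch.randn(num_nodes, in_dim)
    adj = (torch.rand(num_nodes, num_nodes) > 0.5).float()
    adj_norm = adj / adj.sum(dim=1, keepdim=True).clamp(min=1.0)
    layer = TinyGNNLayer(in_dim, out_dim)
    pooled, node_sensitivities = readout_with_derivative_features(x, adj_norm, layer)
    print(pooled.shape, node_sensitivities.shape)  # torch.Size([4]) torch.Size([5, 3])
```

The derivative tensor can then be concatenated to the node features or pooled into a graph-level descriptor; the sketch stops at computing it, since how such features are used differs across methods.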