The field of graph neural networks (GNNs) is moving toward more expressive and interpretable models. Recent research has focused on improving the ability of GNNs to handle heterophilic graphs, in which neighboring nodes often belong to different classes, so that standard uniform neighbor averaging blurs class-discriminative signal. This has led to new architectures that capture higher-order interactions among features and explicitly model edge directionality. There is also growing interest in understanding the representational geometry of GNNs and how it relates to design choices, as well as in analysis frameworks that move beyond traditional expressivity theory toward measures that reflect practical limitations. Noteworthy papers in this area include:

- Flow Matters: Directional and Expressive GNNs for Heterophilic Graphs, which proposes a direction-aware GNN architecture and reports state-of-the-art results on heterophilic graph datasets.
- What Expressivity Theory Misses: Message Passing Complexity for GNNs, which introduces a complexity-based framework for analyzing GNNs that captures practical limitations missed by classical expressivity results.
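To make the directionality idea concrete, below is a minimal sketch of a message-passing layer that applies separate weight matrices to incoming and outgoing edges, so the update can distinguish edge direction. This is an illustration under stated assumptions, not the architecture from Flow Matters: the class name `DirectionAwareLayer`, the dense adjacency input, and the mean aggregation are all hypothetical choices for clarity.

```python
import torch
import torch.nn as nn

class DirectionAwareLayer(nn.Module):
    """Illustrative sketch (not the Flow Matters model): aggregate
    in-neighbors and out-neighbors with separate weights, so the layer
    can exploit edge directionality on heterophilic graphs."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.w_self = nn.Linear(in_dim, out_dim)  # node's own features
        self.w_in = nn.Linear(in_dim, out_dim)    # messages along incoming edges
        self.w_out = nn.Linear(in_dim, out_dim)   # messages along outgoing edges

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, in_dim) node features; adj: (N, N) float adjacency,
        # where adj[i, j] = 1.0 encodes a directed edge i -> j.
        deg_in = adj.sum(dim=0).clamp(min=1).unsqueeze(1)   # in-degree per node
        deg_out = adj.sum(dim=1).clamp(min=1).unsqueeze(1)  # out-degree per node
        h_in = self.w_in(adj.t() @ x / deg_in)    # mean over predecessors
        h_out = self.w_out(adj @ x / deg_out)     # mean over successors
        return torch.relu(self.w_self(x) + h_in + h_out)

# Tiny usage example on a 3-node directed path 0 -> 1 -> 2.
x = torch.randn(3, 4)
adj = torch.tensor([[0., 1., 0.],
                    [0., 0., 1.],
                    [0., 0., 0.]])
layer = DirectionAwareLayer(4, 8)
print(layer(x, adj).shape)  # torch.Size([3, 8])
```

Keeping the in- and out-aggregations in separate linear maps is what lets the layer treat the same neighbor differently depending on edge orientation, which a symmetric (undirected) aggregation cannot do.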