The field of graph neural networks (GNNs) and representation learning is advancing rapidly, with a focus on developing more efficient, scalable, and expressive models. Recent work has highlighted the importance of learning repetition-invariant representations for polymer informatics, as well as the need for more efficient communication protocols in distributed GNN training. Another key direction is halting mechanisms for recurrent GNNs, with which recurrent GNNs can express all node classifiers definable in graded modal mu-calculus (a halting sketch appears after the list below). There is also growing interest in applying GNNs to real-world problems such as traffic flow modeling and graph property learning.

Noteworthy papers include:

- Learning Repetition-Invariant Representations for Polymer Informatics: introduces a method for learning polymer representations that are invariant to the number of repeating units in the polymer's graph representation (illustrated in the first sketch below).
- RapidGNN: achieves significant reductions in training time and remote feature fetches, outperforming existing approaches in both communication efficiency and throughput.
- HOPSE: proposes a message-passing-free framework that uses Hasse graph decompositions to derive efficient and expressive encodings over arbitrary higher-order domains.
- G2PM: represents graph instances as sequences of substructures and applies generative pre-training over those sequences to learn generalizable, transferable representations (see the last sketch below).
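The repetition-invariance idea can be illustrated with a toy example: if a polymer's readout averages over repeat-unit embeddings instead of summing them, the representation does not grow with chain length. The sketch below shows this in plain PyTorch; it is a minimal simplification of the general idea, not the paper's method, and `embed_monomer`, `embed_polymer`, and the dense-adjacency message passing are illustrative assumptions (junction bonds between repeat units are ignored).

```python
# Minimal sketch: repetition invariance via mean pooling over repeat units.
# Illustrative simplification only, not the method from "Learning
# Repetition-Invariant Representations for Polymer Informatics".
import torch

def embed_monomer(x: torch.Tensor, adj: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    """One round of dense message passing, then mean-pool node features."""
    h = torch.relu(adj @ x @ w)   # aggregate neighbor features, transform
    return h.mean(dim=0)          # graph-level readout for one repeat unit

def embed_polymer(monomer_x, monomer_adj, n_repeats, w):
    """Embed a chain of n_repeats copies of a monomer graph.

    Averaging over repeat-unit embeddings makes the readout independent of
    n_repeats (junction bonds between units are ignored here).
    """
    unit = embed_monomer(monomer_x, monomer_adj, w)
    units = unit.unsqueeze(0).expand(n_repeats, -1)
    return units.mean(dim=0)      # same vector for any n_repeats

torch.manual_seed(0)
x = torch.randn(4, 8)                         # 4 atoms, 8 features each
adj = (torch.rand(4, 4) > 0.5).float()
adj = ((adj + adj.T) > 0).float()             # symmetrize adjacency
w = torch.randn(8, 16)

e2 = embed_polymer(x, adj, 2, w)
e5 = embed_polymer(x, adj, 5, w)
assert torch.allclose(e2, e5)                 # readout ignores repeat count
```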
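For halting, one common construction is adaptive-computation-time-style halting, where each node accumulates halting probability across recurrent steps and the recurrence stops once every node's budget is spent. The sketch below follows that general pattern; the paper's construction, tied to graded modal mu-calculus expressiveness, is more specific, and the class name, layer shapes, and hyperparameters here are assumptions.

```python
# Hedged sketch of a per-node halting mechanism for a recurrent GNN, in the
# spirit of adaptive computation time (ACT); illustrative, not the paper's
# construction.
import torch

class RecurrentGNNWithHalting(torch.nn.Module):
    def __init__(self, dim: int, max_steps: int = 16, eps: float = 0.01):
        super().__init__()
        self.step = torch.nn.Linear(2 * dim, dim)  # shared recurrent update
        self.halt = torch.nn.Linear(dim, 1)        # per-node halting score
        self.max_steps, self.eps = max_steps, eps

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        h = x
        budget = torch.ones(x.size(0))             # remaining halting mass
        out = torch.zeros_like(x)
        for _ in range(self.max_steps):
            msg = adj @ h                          # neighbor aggregation
            h = torch.tanh(self.step(torch.cat([h, msg], dim=-1)))
            p = torch.sigmoid(self.halt(h)).squeeze(-1)
            p = torch.minimum(p, budget)           # cannot exceed the budget
            out = out + p.unsqueeze(-1) * h        # halting-weighted output
            budget = budget - p
            if bool((budget < self.eps).all()):    # every node has halted
                break
        return out + budget.unsqueeze(-1) * h      # spend leftover mass
```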
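G2PM's recipe, as summarized above, amounts to tokenizing each graph into a sequence of substructures and pre-training a generative model on those sequences. Below is a minimal stand-in using a GRU language model over random placeholder tokens; the vocabulary size, tokenizer, and architecture are illustrative assumptions, not G2PM's actual design.

```python
# Hedged sketch: generative pre-training over substructure sequences.
# Tokenizer, vocabulary, and model are illustrative stand-ins for G2PM.
import torch

VOCAB, DIM, CONTEXT = 32, 64, 16   # assumed sizes for the toy example

class SubstructureLM(torch.nn.Module):
    """Tiny causal model: predict the next substructure token in a sequence."""
    def __init__(self):
        super().__init__()
        self.embed = torch.nn.Embedding(VOCAB, DIM)
        self.rnn = torch.nn.GRU(DIM, DIM, batch_first=True)
        self.head = torch.nn.Linear(DIM, VOCAB)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        h, _ = self.rnn(self.embed(tokens))
        return self.head(h)        # logits over the next substructure token

model = SubstructureLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Each "graph" is assumed pre-tokenized into substructure ids (e.g., rings,
# motifs); random ids stand in for real tokenized graphs here.
batch = torch.randint(0, VOCAB, (8, CONTEXT))
logits = model(batch[:, :-1])      # predict token t+1 from the prefix
loss = torch.nn.functional.cross_entropy(
    logits.reshape(-1, VOCAB), batch[:, 1:].reshape(-1))
loss.backward()
opt.step()
```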