Research on graph neural networks (GNNs) is advancing rapidly, with much of the recent focus on scalability and efficiency. New architectures and techniques, such as pre-propagation GNNs and adaptive high-order neighboring feature fusion, tackle over-smoothing while scaling to large graphs. In parallel, data storage models are improving, including enhanced vertex-centric storage for evolving graphs and adaptive structural encodings for columnar storage. Together, these advances stand to make both GNN training and graph data storage substantially faster and more efficient. Noteworthy papers in this area include:
- Graph Learning at Scale: Characterizing and Optimizing Pre-Propagation GNNs, which proposes optimized data loading schemes and tailored training methods to improve PP-GNN training throughput (the pre-propagation idea is sketched in the first snippet after this list).
- ScaleGNN: Towards Scalable Graph Neural Networks via Adaptive High-order Neighboring Feature Fusion, which introduces a framework for large-scale graphs that addresses over-smoothing and scalability together (see the fusion sketch after this list).
- Lance: Efficient Random Access in Columnar Storage through Adaptive Structural Encodings, which describes a structural encoding scheme that improves random access performance without trading off scan performance or RAM utilization (see the lookup example after this list).
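
To make the pre-propagation idea concrete, below is a minimal sketch in the spirit of PP-GNNs: neighbor aggregation is pushed into a one-time preprocessing pass, so training afterwards touches only fixed feature matrices. The symmetric normalization and the function name `precompute_hops` are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np
import scipy.sparse as sp

def precompute_hops(adj: sp.csr_matrix, X: np.ndarray, k: int) -> list:
    """Propagate node features k hops before training (illustrative sketch).

    Assumes symmetric normalization A_hat = D^{-1/2} (A + I) D^{-1/2};
    the paper's preprocessing may differ.
    """
    n = adj.shape[0]
    A = adj + sp.eye(n, format="csr")
    deg = np.asarray(A.sum(axis=1)).ravel()
    d_inv_sqrt = sp.diags(1.0 / np.sqrt(deg))
    A_hat = d_inv_sqrt @ A @ d_inv_sqrt
    # One matrix of pre-propagated features per hop; training then needs
    # no graph access at all, which is what makes PP-GNNs easy to scale.
    hops, Xk = [], X
    for _ in range(k):
        Xk = A_hat @ Xk
        hops.append(Xk)
    return hops
```

Because the hop matrices are computed once and streamed from storage, the training loop reduces to mini-batch training of a plain feature model, which is the stage the paper's data loading and training optimizations target.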
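
The adaptive high-order fusion in ScaleGNN can then be pictured as a learned weighting over such hop features. The sketch below uses a simple softmax over per-hop scores; the weighting scheme and the names `fuse_hops` and `hop_logits` are assumptions for illustration, and the paper's actual fusion mechanism is more involved.

```python
import numpy as np

def fuse_hops(hop_feats: list, hop_logits: np.ndarray) -> np.ndarray:
    """Fuse multi-hop features with adaptive weights (illustrative sketch).

    hop_feats: K arrays of shape (N, F), e.g. from precompute_hops above.
    hop_logits: K learnable scores; a softmax turns them into hop weights,
    so the model can down-weight distant hops and resist over-smoothing.
    """
    w = np.exp(hop_logits - hop_logits.max())
    w = w / w.sum()
    return sum(wk * Xk for wk, Xk in zip(w, hop_feats))
```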
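
On the storage side, the snippet below shows the random-access pattern that Lance targets, using the pylance Python package; the calls (`lance.write_dataset`, `dataset.take`) are from its public API as currently documented, and nothing here depends on the internals of the adaptive structural encodings themselves.

```python
import pyarrow as pa
import lance  # pip install pylance

# Write a small Arrow table in Lance format.
table = pa.table({"id": list(range(1_000)),
                  "embedding": [[float(i)] * 4 for i in range(1_000)]})
lance.write_dataset(table, "demo.lance", mode="overwrite")

# Point lookups by row position: the access pattern the adaptive
# structural encodings accelerate without hurting full scans.
ds = lance.dataset("demo.lance")
rows = ds.take([3, 17, 999])
print(rows.to_pydict()["id"])  # [3, 17, 999]
```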