Graph processing and graph neural networks continue to advance quickly, with an emphasis on efficient, scalable techniques for large-scale data. Recent work centers on improving the performance of graph neural networks, with innovations in mini-batching, similarity search, and substructure discovery. Notably, disk-based similarity search and structure-aware randomized mini-batching have yielded significant improvements in training time and accuracy. The application of Gaussian processes to graph-based problems has also shown promise, with low-rank computation enabling efficient posterior mean calculation (a minimal sketch of this idea follows the paper list below). Some noteworthy papers in this area include:
- Industrial-Scale Neural Network Clone Detection with Disk-Based Similarity Search, which shows that disk-based similarity search makes neural clone detection practical at industrial scale.
- Efficient GNN Training Through Structure-Aware Randomized Mini-Batching, which constructs randomized mini-batches that account for graph structure to make GNN training more efficient (a sketch of one such scheme appears after this list).
- Efficient Learning on Large Graphs using a Densifying Regularity Lemma, which introduces a low-rank factorization of large directed graphs for efficient learning.
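
One plausible reading of structure-aware randomized mini-batching, referenced above, is to partition the graph so that most edges stay inside a batch and then randomize which partitions form each batch. The sketch below is not the paper's algorithm; it is a minimal illustration under that assumption, with a simple greedy region-growing partitioner standing in for the stronger partitioners (e.g. METIS) typically used in practice. The function names and toy data are hypothetical.

```python
import random

def grow_partitions(adj, num_parts):
    """Greedy region-growing partitioner: grow contiguous groups of nodes so
    that most of each group's edges stay inside the group.  `adj` maps each
    node to an iterable of neighbours.  Illustrative stand-in for the stronger
    partitioners (e.g. METIS) used in practice."""
    target = max(1, len(adj) // num_parts)
    unassigned = set(adj)
    parts = []
    while unassigned:
        frontier = [next(iter(unassigned))]
        part = []
        while frontier and len(part) < target:
            u = frontier.pop()
            if u in unassigned:
                unassigned.remove(u)
                part.append(u)
                frontier.extend(v for v in adj[u] if v in unassigned)
        parts.append(part)
    return parts

def structure_aware_batches(adj, num_parts, parts_per_batch, rng=random):
    """Yield mini-batches that are randomized (partitions are shuffled and
    combined at random) yet structure-aware (each partition is a contiguous
    region of the graph, so most edges stay within a batch)."""
    parts = grow_partitions(adj, num_parts)
    order = list(range(len(parts)))
    rng.shuffle(order)
    for i in range(0, len(order), parts_per_batch):
        yield [n for j in order[i:i + parts_per_batch] for n in parts[j]]

# Toy usage on a small ring graph (hypothetical data, purely illustrative).
adj = {i: [(i - 1) % 100, (i + 1) % 100] for i in range(100)}
for batch in structure_aware_batches(adj, num_parts=10, parts_per_batch=2):
    pass  # run the usual GNN forward/backward pass on this node batch
```

Training would then iterate over these batches, running the standard forward/backward pass on the subgraph induced by each one.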
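
The low-rank Gaussian-process point mentioned above can be made concrete as well. Assuming the kernel matrix over graph nodes admits a rank-r factorization K ≈ U Uᵀ (how that factorization is obtained is paper-specific and not reproduced here), the Woodbury identity reduces the posterior-mean solve from O(n³) to O(nr²). The following NumPy sketch is written under that assumption; the function name and toy data are illustrative, not taken from any of the papers.

```python
import numpy as np

def low_rank_gp_posterior_mean(U, y, noise_var):
    """Posterior mean of GP regression when the kernel matrix is approximated
    by a rank-r factorization K ~= U @ U.T, with U of shape (n, r).
    Woodbury identity:
        (U U^T + s I)^{-1} y = (y - U (s I_r + U^T U)^{-1} U^T y) / s
    so only an r x r system is solved instead of an n x n one."""
    n, r = U.shape
    s = noise_var
    small = s * np.eye(r) + U.T @ U                         # (r, r) system
    alpha = (y - U @ np.linalg.solve(small, U.T @ y)) / s   # (K + s I)^{-1} y
    return U @ (U.T @ alpha)                                # K @ alpha at training points

# Toy usage: random low-rank features standing in for a graph kernel factor.
rng = np.random.default_rng(0)
U = rng.normal(size=(500, 20))
y = U @ rng.normal(size=20) + 0.1 * rng.normal(size=500)
mean = low_rank_gp_posterior_mean(U, y, noise_var=0.01)
```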