Advances in Efficient Computing for Graph Neural Networks and Deep Learning

Research on graph neural networks and deep learning is moving toward more efficient and scalable computation. Recent work reduces computational cost and improves performance through specialized code synthesis, sparse acceleration, and hyperdimensional computing. Together, these directions promise large-scale graph neural network training on commodity hardware and more energy-efficient deep learning models, with several papers reporting substantial speedups and reductions in memory consumption.

Noteworthy papers include:

Morphling, which improves per-epoch GNN training throughput by an average of 20x on CPUs and 19x on GPUs.

ESACT, an end-to-end sparse accelerator that exploits sparsity across all transformer components and improves attention-level energy efficiency by 2.95x and 2.26x.

VS-Graph, a vector-symbolic graph learning framework that narrows the gap between the efficiency of hyperdimensional computing and the expressive power of message passing, achieving accuracy competitive with modern graph neural networks while accelerating training by up to 450x.

Two brief sketches below illustrate the ideas behind sparse GNN aggregation and vector-symbolic graph encoding.
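
To see why sparse kernels matter for GNN training, the sketch below expresses neighbor aggregation, the core GNN kernel, as a sparse-dense matrix product with SciPy. This is a generic, minimal example on assumed toy data, not Morphling's fused code-synthesis pipeline; the graph, sizes, and mean-style normalization are illustrative choices.

```python
# Minimal sketch: GNN neighbor aggregation as a sparse-dense matmul.
# Storing the adjacency in CSR form avoids materializing a dense N x N
# matrix, so memory and FLOPs scale with the number of edges, not N^2.
import numpy as np
import scipy.sparse as sp

num_nodes, feat_dim = 10_000, 64
rng = np.random.default_rng(0)

# Hypothetical random sparse graph with ~10 edges per node (toy data).
adj = sp.random(num_nodes, num_nodes, density=10 / num_nodes,
                format="csr", random_state=0)
features = rng.standard_normal((num_nodes, feat_dim)).astype(np.float32)

# Mean-style aggregation: normalize rows by degree, then one SpMM.
deg = np.asarray(adj.sum(axis=1)).ravel()
deg[deg == 0] = 1.0
norm_adj = sp.diags(1.0 / deg) @ adj
aggregated = norm_adj @ features      # shape: (num_nodes, feat_dim)

print(aggregated.shape)
```

The same aggregation with a dense adjacency would need num_nodes**2 entries; the CSR version stores only the nonzeros, which is the basic reason sparse formats make large graphs tractable on commodity hardware.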
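The vector-symbolic idea behind frameworks like VS-Graph can be sketched with standard hyperdimensional-computing primitives: random bipolar hypervectors, binding by element-wise product, and bundling by summation. The code below is a minimal sketch under those assumptions, not the paper's encoder; the shared codebook, dimensionality, and toy graphs are hypothetical.

```python
# Minimal vector-symbolic sketch of graph encoding.
# Assumptions: bipolar (+1/-1) hypervectors, binding = element-wise
# product (symmetric, so edges are undirected), bundling = summation.
import numpy as np

DIM = 10_000                          # hypervector dimensionality
rng = np.random.default_rng(42)

def random_hv():
    """Random bipolar hypervector."""
    return rng.choice([-1, 1], size=DIM).astype(np.int8)

# Shared codebook of node hypervectors so graphs are comparable.
codebook = [random_hv() for _ in range(10)]

def encode_graph(edges, codebook):
    """Bundle the bound hypervectors of all edges into one graph code."""
    acc = np.zeros(DIM, dtype=np.int32)
    for u, v in edges:
        # Binding represents the edge (u, v); summation superimposes
        # all edges into a single fixed-size vector.
        acc += codebook[u].astype(np.int32) * codebook[v]
    return np.sign(acc)               # back to a bipolar code

# Toy usage: two small graphs compared by normalized dot product.
g1 = encode_graph([(0, 1), (1, 2), (2, 0)], codebook)
g2 = encode_graph([(0, 1), (1, 2)], codebook)
print(f"similarity: {g1 @ g2 / DIM:.3f}")
```

Classification in this style reduces to comparing graph codes against class prototypes with simple vector arithmetic, which is why training can be orders of magnitude cheaper than gradient-based message passing.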

Sources

Morphling: Fast, Fused, and Flexible GNN Training at Scale

ESACT: An End-to-End Sparse Accelerator for Compute-Intensive Transformers via Local Similarity

Sparse Computations in Deep Learning Inference

VS-Graph: Scalable and Efficient Graph Classification Using Hyperdimensional Computing

Hyperdimensional Computing for Sustainable Manufacturing: An Initial Assessment

A Structure-Aware Irregular Blocking Method for Sparse LU Factorization

Efficient Spatially-Variant Convolution via Differentiable Sparse Kernel Complex
