Materials modeling is advancing rapidly through the integration of graph neural networks (GNNs) and large language models (LLMs). Researchers are probing the scaling limits of GNNs, designing architectures that combine the strengths of both model families, and refining the training techniques these hybrids require. A key direction is the joint use of structural and semantic signals in text-attributed graphs, where each node carries a textual description, so that models capture both graph topology and semantic richness. There is also growing attention to uncertainty quantification in GNNs, aimed at making predictions more reliable on out-of-domain data (a minimal sketch of one common approach follows the paper list below). Noteworthy papers include:
- A study on scaling laws of GNNs for atomistic materials modeling, which lays the groundwork for large-scale GNNs with billions of parameters and terabyte-scale datasets.
- The proposal of BiGTex, a novel architecture that tightly integrates GNNs and LLMs through stacked Graph-Text Fusion Units, achieving state-of-the-art performance in node classification and link prediction (an illustrative fusion-unit sketch follows below).
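
To make the fusion idea concrete, here is a minimal sketch of what one graph-text fusion unit could look like: a message-passing step over node features, followed by cross-attention from each node to the LLM token embeddings of its text attribute. This is an assumption for illustration only; the class name, shapes, and internals are hypothetical and the BiGTex paper's actual design may differ.

```python
import torch
import torch.nn as nn

class GraphTextFusionUnit(nn.Module):
    """Hypothetical fusion block: structural message passing, then
    cross-attention from each node to its own text-token embeddings."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.gnn_linear = nn.Linear(dim, dim)  # simple GCN-style transform
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x, adj, text_tokens):
        # x: (N, dim) node features; adj: (N, N) normalized adjacency
        # text_tokens: (N, T, dim) LLM token embeddings of each node's text
        h = torch.relu(adj @ self.gnn_linear(x))  # structural signal
        q = h.unsqueeze(1)                        # each node queries its tokens
        fused, _ = self.cross_attn(q, text_tokens, text_tokens)
        return self.norm(h + fused.squeeze(1))    # residual fusion of both signals

# "Stacked" units, as in the paper's description, would simply be chained:
units = nn.ModuleList([GraphTextFusionUnit(64) for _ in range(3)])
x, adj = torch.randn(10, 64), torch.eye(10)
tokens = torch.randn(10, 5, 64)
for unit in units:
    x = unit(x, adj, tokens)
```

Chaining units lets structural and textual signals refine each other layer by layer, rather than fusing them once at the output, which is the plausible reading of "tightly integrates" here.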
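
On the uncertainty-quantification trend, one widely used baseline is Monte Carlo dropout: keep dropout active at inference and treat the spread over repeated stochastic passes as a confidence signal, which tends to widen on out-of-domain inputs. The sketch below assumes a toy two-layer GCN-style network; the surveyed papers may use other techniques (ensembles, evidential methods), so this is illustrative rather than a description of any specific paper's method.

```python
import torch
import torch.nn as nn

class DropoutGCN(nn.Module):
    """Toy GCN-style network with dropout, for MC-dropout uncertainty."""

    def __init__(self, in_dim, hidden_dim, out_dim, p=0.5):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hidden_dim)
        self.lin2 = nn.Linear(hidden_dim, out_dim)
        self.drop = nn.Dropout(p)

    def forward(self, x, adj):
        h = torch.relu(adj @ self.lin1(x))  # adj: (N, N) normalized adjacency
        return adj @ self.lin2(self.drop(h))

@torch.no_grad()
def mc_dropout_predict(model, x, adj, n_samples=30):
    model.train()  # keep dropout stochastic at inference time
    preds = torch.stack([model(x, adj).softmax(dim=-1) for _ in range(n_samples)])
    return preds.mean(0), preds.std(0)  # mean prediction and per-class spread

model = DropoutGCN(16, 32, 4)
mean, spread = mc_dropout_predict(model, torch.randn(10, 16), torch.eye(10))
```

High per-class spread flags nodes whose predictions should be trusted less, which is exactly the reliability concern the out-of-domain work in this area targets.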