Advances in Graph Neural Networks and Few-Shot Learning

The field of graph neural networks and few-shot learning is evolving rapidly, with a focus on developing more adaptable and efficient models. Recent research has emphasized capturing diverse computational patterns and high-order structures in graphs, as well as mitigating data sparsity and overfitting. A key direction is the development of mixture-of-experts (MoE) models, which enable robust, unsupervised training of heterogeneous experts on graphs. Another notable trend is the design of parameter-free and interpretable models, such as linearized hypergraph neural networks, which achieve state-of-the-art performance while providing insight into the characteristics of a dataset.

Noteworthy papers in this area include ADaMoRE, which introduces a principled framework for unsupervised training of heterogeneous MoE models on graphs and achieves state-of-the-art performance in unsupervised node classification and few-shot learning, and MoEMeta, which proposes a meta-learning framework that disentangles globally shared knowledge from task-specific contexts, enabling both effective generalization and rapid adaptation in few-shot relational learning.
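As a rough illustration of the MoE idea (not ADaMoRE's actual architecture), the sketch below mixes the outputs of two heterogeneous experts per node via a softmax gate; the expert definitions and gating weights are hypothetical, chosen only to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 5, 4, 2                                # toy sizes: nodes, features, experts

X = rng.normal(size=(n, d))                      # node features
A = np.full((n, n), 1.0 / n)                     # stand-in for a normalized adjacency

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Heterogeneous experts: one smooths features over the graph structure,
# the other applies a purely feature-based nonlinear transform.
W_mlp = rng.normal(size=(d, d))
experts = [
    lambda X, A: A @ X,                          # structure-aware smoothing expert
    lambda X, A: np.tanh(X @ W_mlp),             # feature-transform expert
]

W_gate = rng.normal(size=(d, k))                 # hypothetical gating weights

def moe_forward(X, A):
    gates = softmax(X @ W_gate)                  # (n, k): per-node expert weights
    outs = np.stack([f(X, A) for f in experts])  # (k, n, d): all expert outputs
    return np.einsum("nk,knd->nd", gates, outs)  # gate-weighted mixture per node

print(moe_forward(X, A).shape)                   # (5, 4)
```

The parameter-free, linearized hypergraph idea can be sketched in the same spirit: propagation reduces to repeated mean-pooling from nodes to hyperedges and back through the incidence matrix, with no trainable weights. The incidence matrix and two-step update below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def hypergraph_propagate(X, H, steps=2):
    """X: (n, f) node features; H: (n, m) node-hyperedge incidence matrix."""
    Dv = np.maximum(H.sum(axis=1, keepdims=True), 1)  # node degrees; avoid /0
    De = np.maximum(H.sum(axis=0, keepdims=True), 1)  # hyperedge degrees
    for _ in range(steps):
        E = (H / De).T @ X   # hyperedge features: mean of member nodes
        X = (H / Dv) @ E     # node features: mean over incident hyperedges
    return X

# 4 nodes, 2 hyperedges: {0, 1, 2} and {2, 3}
H = np.array([[1, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
X = np.arange(8, dtype=float).reshape(4, 2)
print(hypergraph_propagate(X, H))
```

Because the propagation itself has no parameters, few-shot node classification can then be run directly on the smoothed features with a simple classifier such as nearest class mean.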

Sources

Adaptive Graph Mixture of Residual Experts: Unsupervised Learning on Diverse Graphs with Heterogeneous Specialization

Parameter-Free Hypergraph Neural Network for Few-Shot Node Classification

Conjugate Relation Modeling for Few-Shot Knowledge Graph Completion

MoEMeta: Mixture-of-Experts Meta Learning for Few-Shot Relational Learning

On the Dataless Training of Neural Networks