Advances in Graph Representation and Hypothesis Testing

The field of graph representation and hypothesis testing is moving towards more efficient and expressive methods. Recent work has focused on designing novel graph neural networks, such as random search neural networks, that capture global structure in sparse graphs. There is also growing interest in testing properties of functions on hypergrids, including pattern freeness and monotonicity. Noteworthy papers in this area include: Instance-Adaptive Hypothesis Tests with Heterogeneous Agents, which establishes a connection between statistical decision theory and mechanism design; Random Search Neural Networks for Efficient and Expressive Graph Learning, which proposes an approach to graph representation learning that guarantees full node coverage; Testing forbidden order-pattern properties on hypergrids, which initiates a systematic study of pattern freeness on higher-dimensional grids; and Charting the Design Space of Neural Graph Representations for Subgraph Matching, which comprehensively explores the design space of graph matching networks.

Sources

Instance-Adaptive Hypothesis Tests with Heterogeneous Agents

On Local Limits of Sparse Random Graphs: Color Convergence and the Refined Configuration Model

Relative-error unateness testing

Contextual Tokenization for Graph Inverted Indices

Random Search Neural Networks for Efficient and Expressive Graph Learning

Iteratively Refined Early Interaction Alignment for Subgraph Matching based Graph Retrieval

Testing forbidden order-pattern properties on hypergrids

Charting the Design Space of Neural Graph Representations for Subgraph Matching

Scaling Up Bayesian DAG Sampling
