Advances in Robustness and Security for Graph Neural Networks and Diffusion Models

The field of graph neural networks (GNNs) and diffusion models is advancing rapidly, with growing attention to robustness and security. Recent work highlights how initialization strategies and their associated hyper-parameters shape a model's robustness, and underscores the need for effective watermarking schemes to protect intellectual property. New approaches to robustness, such as singular pooling and manifold-constrained graph condensation, have shown promising resilience against adversarial perturbations. In parallel, watermarking methods based on implicit perception of topological invariants and on sample-specific clean-label backdoor triggers are being proposed to address ownership verification and protection.

Notable papers:

- Noise Aggregation Analysis Driven by Small-Noise Injection proposes an efficient membership inference attack against diffusion models.
- T2SMark presents a two-stage watermarking scheme for diffusion models that balances robustness and diversity.
- Enhancing Graph Classification Robustness with Singular Pooling introduces a novel pooling strategy that improves the robustness of graph neural networks.
- If You Want to Be Robust, Be Wary of Initialization shows that weight initialization and the associated hyper-parameters strongly influence a model's robustness.
- Robust GNN Watermarking via Implicit Perception of Topological Invariants proposes a trigger-free, black-box ownership verification method for graph neural networks.
- SSCL-BW presents a sample-specific clean-label backdoor watermarking method for dataset ownership verification.
- Robust Graph Condensation via Classification Complexity Mitigation proposes a manifold-constrained robust graph condensation framework that improves the robustness of graph condensation.
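The connection between initialization scale and robustness can be illustrated with a minimal sketch (this is a generic illustration of the principle, not the method of any paper above; the function name and parameters are made up for demonstration). For a linear layer y = Wx, the worst-case amplification of an input perturbation is bounded by the largest singular value of W, so larger initialization scales tend to amplify adversarial perturbations more:

```python
import numpy as np

rng = np.random.default_rng(0)

def perturbation_amplification(init_scale, dim=64, eps=0.1, trials=100):
    """Average output change ||W(x+d) - Wx|| = ||W d|| for random
    perturbations d with ||d|| = eps, given Gaussian init of scale init_scale."""
    W = rng.normal(0.0, init_scale, size=(dim, dim))
    total = 0.0
    for _ in range(trials):
        d = rng.normal(size=dim)
        d = eps * d / np.linalg.norm(d)   # fixed-norm perturbation
        total += np.linalg.norm(W @ d)
    return total / trials

small = perturbation_amplification(0.01)
large = perturbation_amplification(1.0)
print(small, large)  # larger init scale -> larger amplification
```

Since ||W d|| scales linearly with the entries of W, the second call yields roughly 100x the amplification of the first; this is the intuition behind treating initialization scale as a robustness hyper-parameter rather than a neutral choice.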

Sources

Noise Aggregation Analysis Driven by Small-Noise Injection: Efficient Membership Inference for Diffusion Models

T2SMark: Balancing Robustness and Diversity in Noise-as-Watermark for Diffusion Models

Enhancing Graph Classification Robustness with Singular Pooling

If You Want to Be Robust, Be Wary of Initialization

Robust GNN Watermarking via Implicit Perception of Topological Invariants

SSCL-BW: Sample-Specific Clean-Label Backdoor Watermarking for Dataset Ownership Verification

Robust Graph Condensation via Classification Complexity Mitigation
