Advances in Adversarial Robustness and Geometric Deep Learning

Deep learning research is increasingly focused on adversarial robustness and on incorporating geometric principles into model design. Researchers are developing new defenses against adversarial attacks, such as layer-wise gradient-feature alignment and radial compensation, which show promise for improving robustness. There is also growing interest in geometric deep learning, with recent work on hyperbolic networks, Riemannian manifolds, and Kähler geometry; these advances could lead to more robust and generalizable models. Noteworthy papers include "Transferable Hypergraph Attack via Injecting Nodes into Pivotal Hyperedges", which presents a framework for attacking hypergraph neural networks; "Volatility in Certainty (VC)", which introduces a metric for detecting adversarial examples; and "Radial Compensation", which proposes a method for stable, semantically decoupled generative models on curved spaces.
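To make the notion of an adversarial perturbation concrete, here is a minimal textbook-style sketch of a gradient-sign attack (FGSM) on a toy logistic-regression model. This is a generic illustration of the attack family these papers defend against, not the method of any paper listed below; the model weights and epsilon value are arbitrary choices for demonstration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy fixed model: weights and bias are illustrative assumptions.
w = np.array([2.0, -3.0, 1.0])
b = 0.5

def predict(x):
    """Confidence that x belongs to the positive class."""
    return sigmoid(w @ x + b)

def fgsm_perturb(x, y_true, eps=0.1):
    """Shift x by eps in the sign direction of the loss gradient.

    For logistic (cross-entropy) loss, dL/dx = (p - y) * w,
    so the perturbation direction is sign((p - y) * w).
    """
    p = predict(x)
    grad = (p - y_true) * w          # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad)   # FGSM step: move to increase the loss

x = np.array([1.0, 0.5, -0.5])
y = 1.0                              # true label
p_clean = predict(x)
x_adv = fgsm_perturb(x, y, eps=0.3)
p_adv = predict(x_adv)
print(f"confidence clean: {p_clean:.3f}, adversarial: {p_adv:.3f}")
```

Even this tiny perturbation (bounded by eps per coordinate) noticeably lowers the model's confidence in the true label, which is the failure mode that detection metrics and alignment-based defenses aim to address.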

Sources

Transferable Hypergraph Attack via Injecting Nodes into Pivotal Hyperedges

Volatility in Certainty (VC): A Metric for Detecting Adversarial Perturbations During Inference in Neural Network Classifiers

Robust Bidirectional Associative Memory via Regularization Inspired by the Subspace Rotation Algorithm

CAO: Curvature-Adaptive Optimization via Periodic Low-Rank Hessian Sketching

Training Instabilities Induce Flatness Bias in Gradient Descent

Angular Gradient Sign Method: Uncovering Vulnerabilities in Hyperbolic Networks

DeepDefense: Layer-Wise Gradient-Feature Alignment for Building Robust Neural Networks

Certified but Fooled! Breaking Certified Defences with Ghost Certificates

Radial Compensation: Stable and Semantically Decoupled Generative Models on Riemannian Manifolds

Complex variational autoencoders admit Kähler structure
