The field of deep learning is moving toward improved adversarial robustness and the incorporation of geometric principles into model design. Researchers are exploring new defenses against adversarial attacks, such as gradient-feature alignment and radial compensation, which show promise for improving model robustness. There is also growing interest in geometric deep learning, with studies of hyperbolic networks, Riemannian manifolds, and Kähler geometry. Together, these advances could lead to more robust and generalizable models. Noteworthy papers include:

- Transferable Hypergraph Attack via Injecting Nodes into Pivotal Hyperedges, which presents a novel framework for attacking hypergraph neural networks.
- Volatility in Certainty (VC): A Metric for Detecting Adversarial Perturbations, which introduces a new metric for detecting adversarial examples.
- Radial Compensation: Stable and Semantically Decoupled Generative Models on Riemannian Manifolds, which proposes a method for stable generative modeling on curved spaces.
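To make the geometric side of this trend concrete, here is a minimal sketch of the kind of primitive hyperbolic networks build on: the geodesic distance in the Poincaré ball model of hyperbolic space. This is the standard textbook formula, not code taken from any of the papers above.

```python
import math

def poincare_distance(u, v):
    """Geodesic distance between points u and v in the Poincare ball
    (constant curvature -1). Standard formula:
        d(u, v) = arccosh(1 + 2 ||u - v||^2 / ((1 - ||u||^2)(1 - ||v||^2)))
    Both points must lie strictly inside the unit ball (norm < 1)."""
    sq_norm = lambda x: sum(xi * xi for xi in x)
    diff = [ui - vi for ui, vi in zip(u, v)]
    numerator = 2.0 * sq_norm(diff)
    denominator = (1.0 - sq_norm(u)) * (1.0 - sq_norm(v))
    return math.acosh(1.0 + numerator / denominator)

# Distances blow up as points approach the boundary of the ball, which
# is why hyperbolic embeddings suit tree-like, hierarchical data.
print(poincare_distance([0.0, 0.0], [0.5, 0.0]))
```

Because volume grows exponentially with radius in hyperbolic space, hierarchies that would need high-dimensional Euclidean embeddings fit in very few hyperbolic dimensions, which is a key motivation behind hyperbolic network research.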
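On the detection side, one family of approaches measures how unstable a model's confidence is under small random input perturbations, since adversarial examples tend to sit near decision boundaries. The sketch below illustrates that generic idea only; it is not the VC metric from the paper cited above, and `predict_confidence` is a hypothetical placeholder for any model's top-class confidence function.

```python
import random
import statistics

def confidence_volatility(predict_confidence, x, noise_scale=0.01,
                          n_samples=20, seed=0):
    """Generic detector sketch (not the VC paper's metric): the standard
    deviation of a model's confidence when the input is jittered with
    small Gaussian noise. High volatility suggests the input lies near a
    decision boundary, as adversarially perturbed inputs often do.

    predict_confidence: hypothetical callable mapping an input vector
    to a confidence score in [0, 1]."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n_samples):
        noisy = [xi + rng.gauss(0.0, noise_scale) for xi in x]
        scores.append(predict_confidence(noisy))
    return statistics.stdev(scores)

# Toy model: a steep sigmoid with its decision boundary at sum(x) == 0.
import math
toy_model = lambda x: 1.0 / (1.0 + math.exp(-10.0 * sum(x)))
print(confidence_volatility(toy_model, [0.0]))  # near the boundary: volatile
print(confidence_volatility(toy_model, [1.0]))  # far from it: stable
```

A detector would flag inputs whose volatility exceeds a threshold calibrated on clean data; the choice of noise scale and sample count trades detection sensitivity against inference cost.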