The field of deep neural networks is moving toward stronger robustness against adversarial attacks. Researchers are exploring new methods to enhance the transferability of transformation-based attacks, as well as novel approaches to adversarial training. One notable direction is the use of dynamic parameter optimization and calibrated adversarial sampling to improve robustness. There is also growing interest in dimension-free approximation results for deep neural networks on symmetric functions.

Noteworthy papers include:

- Dynamic Parameter Optimization for Highly Transferable Transformation-Based Attacks, which proposes a novel Concentric Decay Model to improve attack transferability.
- Calibrated Adversarial Sampling: Multi-Armed Bandit-Guided Generalization Against Unforeseen Attacks, which uses a multi-armed bandit to guide adversarial sampling and achieves superior overall robustness while maintaining high clean accuracy (a minimal bandit sketch follows this list).
- On the Dimension-Free Approximation of Deep Neural Networks for Symmetric Korobov Functions, which proves that the convergence rate and the constant prefactor of the approximation error scale at most polynomially with the ambient dimension (an illustrative form of such a bound appears after the sketch).
- Tuning for Two Adversaries: Enhancing the Robustness Against Transfer and Query-Based Attacks using Hyperparameter Tuning, which explores the optimization hyperparameter design space to jointly enhance robustness against both transfer-based and query-based attacks.
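The bandit-guided sampling idea can be illustrated generically: during adversarial training, a bandit policy chooses which attack to apply to each batch and is rewarded when that attack proves informative (for example, when it induces a high loss on the perturbed batch). The sketch below is a minimal UCB1 selector over a few placeholder attack names; the names, the reward signal, and all constants are illustrative assumptions, not the calibration scheme from the paper.

```python
import math
import random


class UCB1AttackSelector:
    """UCB1 bandit over a fixed set of attacks (illustrative sketch only)."""

    def __init__(self, attack_names):
        self.names = list(attack_names)
        self.counts = [0] * len(self.names)    # times each attack was chosen
        self.totals = [0.0] * len(self.names)  # cumulative reward per attack
        self.t = 0                             # total rounds played

    def select(self):
        self.t += 1
        # Play each arm once before applying the UCB rule.
        for i, c in enumerate(self.counts):
            if c == 0:
                return i

        def ucb(i):
            mean = self.totals[i] / self.counts[i]
            bonus = math.sqrt(2.0 * math.log(self.t) / self.counts[i])
            return mean + bonus

        return max(range(len(self.names)), key=ucb)

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.totals[arm] += reward


if __name__ == "__main__":
    # Hypothetical stand-in for "loss under attack k on this batch":
    # fixed per-attack means plus Gaussian noise, clipped to [0, 1].
    true_reward = {"fgsm": 0.3, "pgd": 0.7, "spatial": 0.5}
    selector = UCB1AttackSelector(true_reward)
    for _ in range(2000):
        arm = selector.select()
        name = selector.names[arm]
        reward = max(0.0, min(1.0, random.gauss(true_reward[name], 0.1)))
        selector.update(arm, reward)
    for name, count in zip(selector.names, selector.counts):
        print(f"{name}: chosen {count} times")
```

In a real training loop the reward would come from the model's behavior on the attacked batch rather than a simulator; the point of the bandit is to concentrate sampling on the attacks that currently drive generalization while still exploring the others.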
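To make the dimension-free claim concrete, results of this kind are typically stated as an approximation bound of the following shape (an illustrative paraphrase in generic notation, not the paper's exact theorem): for a symmetric Korobov-class target f on [0,1]^d, there exists a deep network f_N with N parameters such that

\[
\| f - f_N \|_{L^\infty([0,1]^d)} \;\le\; C(d)\, N^{-\alpha},
\]

where both the prefactor C(d) and the dependence of the exponent \(\alpha\) on d grow at most polynomially in d. This contrasts with generic curse-of-dimensionality bounds, in which C(d) can grow exponentially with the ambient dimension.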