The field of computer vision and deep learning is seeing significant activity in adversarial robustness and defense mechanisms, with researchers exploring methods to harden deep neural networks against a variety of attacks. One key direction is characterizing the robustness requirements of visual foundation models, which underpin many modern computer vision systems. Another is the development of efficient and effective adversarial training techniques, including adapter-based approaches and adaptive guidance methods (a generic adversarial training sketch follows the paper list below). These advances have the potential to improve the trustworthiness of foundation models and to enable their deployment in security-sensitive domains.

Noteworthy papers in this area include:

- AdaGAT, which proposes a novel adaptive guidance adversarial training method to enhance the robustness of deep neural networks.
- SALMAN, which introduces a unified framework for stability analysis of language models via maps between graph-based manifolds, providing a practical tool for advancing the reliability of transformer-based NLP systems.
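Adversarial training methods such as those surveyed above generally alternate an inner attack step, which crafts worst-case perturbations of the inputs, with an outer update that trains the model on those perturbed inputs. The sketch below shows standard PGD-based adversarial training in PyTorch purely as an illustration of this general recipe; it is not AdaGAT's adaptive-guidance scheme, whose specifics are not described here, and the epsilon, step size, and step count are assumed placeholder values.

```python
# Generic PGD-based adversarial training sketch (illustrative only; this is
# standard Madry-style adversarial training, not the AdaGAT method).
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Inner maximization: find an L-inf perturbation of x that raises the loss."""
    # Random start inside the eps-ball, clamped to valid pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Signed-gradient ascent step, projected back into the eps-ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()


def adversarial_train_epoch(model, loader, optimizer, device="cpu"):
    """Outer minimization: update the model on adversarially perturbed batches."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)  # attack the current model state
        optimizer.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()
        optimizer.step()
```

Adapter-based variants of this recipe typically freeze the pretrained backbone and run the same loop while optimizing only small inserted adapter modules, which keeps the cost of robustifying a large foundation model low.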