The fields of computer vision and natural language processing are seeing rapid progress in adversarial attacks and visual anomaly detection. Researchers are developing methods to harden models against adversarial perturbations and to detect anomalies in unseen domains. In particular, super transferable adversarial attacks and universal adversarial perturbations, single patterns crafted to fool a model across many inputs, are drawing attention. In parallel, language-driven approaches built on pre-trained large-scale models are being explored for tasks such as style transfer and anomaly detection. Together, these directions point toward more robust and reliable models.

Noteworthy papers include X-Transfer Attacks, which introduces an attack that exposes a universal adversarial vulnerability in CLIP models; AdaptCLIP, which presents a simple yet effective method for universal visual anomaly detection built on pre-trained vision-language models; and No Query, No Access, which proposes a victim-data-based adversarial attack that operates using only victim texts and achieves state-of-the-art performance.
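To make the universal-perturbation idea concrete, the sketch below optimizes a single input-agnostic pattern so that adding it to any image degrades a classifier. This is a minimal generic sketch, not the method of any paper above; `model` (a differentiable image classifier), `loader` (batches of images and labels with pixels in [0, 1]), and the budget `epsilon` are all assumed placeholders.

```python
import torch
import torch.nn.functional as F

def universal_perturbation(model, loader, epsilon=8 / 255, steps=10, lr=0.01):
    """Optimize one input-agnostic perturbation that raises loss on all inputs.

    `model` and `loader` are assumed placeholders: any differentiable image
    classifier and any iterable of (images, labels) batches in [0, 1].
    """
    model.eval()
    images, _ = next(iter(loader))
    # One shared pattern, broadcast over every batch.
    delta = torch.zeros_like(images[:1], requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        for images, labels in loader:
            adv = (images + delta).clamp(0, 1)           # keep valid pixel range
            loss = -F.cross_entropy(model(adv), labels)  # maximize classification loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            with torch.no_grad():                        # project back into the L-inf ball
                delta.clamp_(-epsilon, epsilon)
    return delta.detach()
```

The key design point is that `delta` is optimized jointly over the whole dataset rather than per image, which is what makes the resulting perturbation "universal".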
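The language-driven anomaly-detection direction can be illustrated with zero-shot prompt scoring on top of a pre-trained CLIP model: embed an image, compare it against "normal" and "damaged" text prompts, and read the softmax weight on the anomalous prompt as the anomaly score. This is a generic sketch using Hugging Face's `transformers` CLIP interface, not the AdaptCLIP method itself; the prompt wording, the checkpoint `openai/clip-vit-base-patch32`, and the file `part.png` are assumptions for illustration.

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def anomaly_score(image: Image.Image, obj: str = "object") -> float:
    """Softmax weight CLIP puts on the anomalous prompt; near 1.0 means anomalous."""
    prompts = [f"a photo of a normal {obj}",   # assumed prompt wording
               f"a photo of a damaged {obj}"]
    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    logits = model(**inputs).logits_per_image  # shape (1, 2): image-text similarity
    return logits.softmax(dim=-1)[0, 1].item() # probability of the "damaged" prompt

score = anomaly_score(Image.open("part.png"), obj="metal part")  # hypothetical input
print(f"anomaly score: {score:.3f}")
```

Because the prompts are plain text, the same two-prompt scheme extends to new object categories without retraining, which is the appeal of language-driven anomaly detection.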