The field of Vision-Language Models (VLMs) is evolving rapidly, with a growing focus on security and robustness. Recent research has highlighted the vulnerability of VLMs to adversarial attacks that can compromise their safety and reliability. In response, researchers have developed both novel attack frameworks and defense mechanisms. Notably, vector quantization techniques have shown promise in creating a discrete bottleneck against adversarial perturbations while preserving multimodal reasoning capabilities. At the same time, new attack methods systematically expose safety vulnerabilities even in leading defense-equipped VLMs, underscoring the need for continued innovation in this area. Noteworthy papers include Stealth Fine-Tuning, a novel attack method that efficiently breaks alignment in RVLMs; Multi-Faceted Attack, a framework that systematically exposes general safety vulnerabilities in leading defense-equipped VLMs; and Q-MLLM, an architecture that integrates two-level vector quantization to create a discrete bottleneck against adversarial attacks.
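To make the discrete-bottleneck idea concrete, the sketch below shows a generic vector-quantization layer applied to continuous visual features before they reach the language model: because each feature is snapped to the nearest entry of a learned codebook, small adversarial perturbations that do not push a feature across a codebook boundary are simply erased. This is a minimal illustration of the general technique, not the Q-MLLM architecture; the class and parameter names (VectorQuantizer, codebook_size, embed_dim, beta) are assumptions made for the example.

```python
# Minimal sketch of a vector-quantization bottleneck for visual features.
# Assumed names and hyperparameters; not the Q-MLLM implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VectorQuantizer(nn.Module):
    def __init__(self, codebook_size: int = 1024, embed_dim: int = 256, beta: float = 0.25):
        super().__init__()
        self.codebook = nn.Embedding(codebook_size, embed_dim)
        nn.init.uniform_(self.codebook.weight, -1.0 / codebook_size, 1.0 / codebook_size)
        self.beta = beta  # weight of the commitment loss term

    def forward(self, z: torch.Tensor):
        # z: (batch, num_patches, embed_dim) continuous visual features
        flat = z.reshape(-1, z.shape[-1])                      # (B*N, D)
        # Squared L2 distance from each feature to every codebook entry
        dists = (
            flat.pow(2).sum(dim=1, keepdim=True)
            - 2 * flat @ self.codebook.weight.t()
            + self.codebook.weight.pow(2).sum(dim=1)
        )                                                      # (B*N, K)
        indices = dists.argmin(dim=1)                          # discrete codes
        z_q = self.codebook(indices).view_as(z)                # quantized features

        # Standard VQ-VAE losses: move codebook entries toward encoder outputs
        # and commit the encoder to its chosen codes.
        codebook_loss = F.mse_loss(z_q, z.detach())
        commit_loss = F.mse_loss(z, z_q.detach())
        loss = codebook_loss + self.beta * commit_loss

        # Straight-through estimator so gradients still reach the visual encoder.
        z_q = z + (z_q - z).detach()
        return z_q, indices.view(z.shape[:-1]), loss


if __name__ == "__main__":
    vq = VectorQuantizer(codebook_size=1024, embed_dim=256)
    patches = torch.randn(2, 196, 256)          # e.g. ViT patch embeddings
    quantized, codes, vq_loss = vq(patches)
    print(quantized.shape, codes.shape, vq_loss.item())
```

A two-level variant of this idea, as described for Q-MLLM, would stack such quantizers (for example, one at the patch level and one at a coarser semantic level) so that both fine-grained and global perturbations must cross a discrete boundary to affect the downstream language model.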