The field of large language models (LLMs) is rapidly evolving, with a growing focus on addressing the security vulnerabilities of these models. Recent research has highlighted the potential risks associated with LLMs, including their susceptibility to jailbreak attacks and the potential for misuse. In response, researchers have proposed various solutions to enhance the security and safety of LLMs, including the development of biosecurity agents, visual-driven adversarial attacks, and safety alignment data curation methods. These innovations aim to reduce the attack success rate of LLMs while preserving their benign utility. Notably, some papers have introduced novel attack strategies, such as MetaBreak and ArtPerception, which can be used to improve the robustness of LLMs. Meanwhile, other researchers have proposed defense mechanisms, including Countermind, CALM, and GuardSpace, which demonstrate promising results in detecting and preventing jailbreak attacks. Overall, the field is moving towards a more comprehensive understanding of LLM security and the development of effective countermeasures. Noteworthy papers include MetaBreak, which achieves high jailbreak rates using special token manipulation, and GuardSpace, which preserves safety alignment throughout fine-tuning.