The field of large language models (LLMs) is advancing rapidly, with a strong focus on safety and security. Recent research has highlighted the vulnerability of LLMs to jailbreak attacks, which aim to bypass safety mechanisms and produce harmful outputs. To address this, researchers have proposed a range of detection and prevention methods, including new evaluation metrics and defense strategies; developing more robust safety mechanisms remains crucial to preventing the misuse of LLMs. Noteworthy papers in this area include:

- GeneShift: proposes a black-box jailbreak attack that uses a genetic algorithm to optimize scenario shifts, achieving a significant increase in jailbreak success rate (a content-free sketch of this style of search loop follows the list).
- The Jailbreak Tax: introduces a new metric for evaluating the utility of jailbroken responses and reveals a consistent drop in model utility.
- Token-Level Constraint Boundary Search: presents a query-based black-box jailbreak attack that searches for tokens near the decision boundaries defined by text and image checkers.
- Bypassing Prompt Injection and Jailbreak Detection: demonstrates two approaches for bypassing LLM prompt injection and jailbreak detection systems.
- Exploring Backdoor Attack and Defense for LLM-empowered Recommendations: proposes a new attack framework and a universal defense strategy for mitigating backdoor attacks in LLM-based recommender systems.
- DataSentinel: proposes a game-theoretic method for detecting prompt injection attacks.
- AttentionDefense: leverages system-prompt attention for explainable defense against novel jailbreaks.
- Propaganda via AI: studies semantic backdoors in large language models and introduces a black-box detection framework.
- GraphAttack: exploits representational blindspots in LLM safety mechanisms through semantic transformations.
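
To make the kind of black-box search described for GeneShift more concrete, below is a minimal, deliberately content-free sketch of a genetic-algorithm loop over scenario-wrapped prompts. It is not the paper's implementation: the scenario pool and the `query_target_model`, `judge_score`, and `mutate` helpers are hypothetical placeholders (the judge even returns random scores), so the code runs without any model access and contains no attack content; it only illustrates the population/score/select/mutate structure that such attacks are described as sharing.

```python
"""Illustrative, defanged sketch of a genetic-algorithm-style black-box
prompt search. All helpers are placeholders; nothing here queries a real
model or encodes real jailbreak wording."""
import random

# Hypothetical, content-free placeholders for "scenario shift" wrappers.
SCENARIO_POOL = [
    "Scenario A: {payload}",
    "Scenario B: {payload}",
    "Scenario C: {payload}",
]


def query_target_model(prompt: str) -> str:
    """Placeholder for a black-box call to the target LLM (e.g. an API request)."""
    return f"[model response to: {prompt[:40]}...]"


def judge_score(response: str) -> float:
    """Placeholder judge returning a score in [0, 1].
    A real pipeline would use a classifier or an LLM-as-judge here."""
    return random.random()


def mutate(template: str) -> str:
    """Toy 'mutation': re-draw a wrapper from the pool.
    A real genetic search would edit or recombine the wording itself."""
    return random.choice(SCENARIO_POOL)


def genetic_scenario_search(payload: str, pop_size: int = 6, generations: int = 10):
    # Initialise a population of scenario wrappers around the fixed payload.
    population = [random.choice(SCENARIO_POOL) for _ in range(pop_size)]
    best = (None, -1.0)
    for _ in range(generations):
        # Score every candidate by querying the (placeholder) target model.
        scored = []
        for template in population:
            response = query_target_model(template.format(payload=payload))
            scored.append((template, judge_score(response)))
        scored.sort(key=lambda pair: pair[1], reverse=True)
        if scored[0][1] > best[1]:
            best = scored[0]
        # Selection: keep the top half, then refill with mutated survivors.
        survivors = [t for t, _ in scored[: pop_size // 2]]
        population = survivors + [mutate(t) for t in survivors]
    return best


if __name__ == "__main__":
    template, score = genetic_scenario_search("placeholder payload text")
    print(f"best template (score={score:.2f}): {template}")
```

A genuine implementation would replace the placeholders with real model queries and a trained judge, and would typically add crossover between high-scoring candidates; the sketch is only meant to convey the loop structure that the defenses surveyed above have to contend with.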