The field of large language models (LLMs) is moving towards a deeper understanding of their vulnerabilities and potential attack vectors. Recent research has highlighted the safety implications of extended contexts in LLMs, as well as the potential for energy-latency attacks. There is also growing interest in the weaknesses of LLMs in out-of-distribution (OOD) scenarios involving unknown invariances, and in the impact of shortcut learning on their performance. Noteworthy papers in this area include: NINJA, which introduces a method for jailbreaking aligned LLMs by appending benign, model-generated content to harmful user goals; LoopLLM, which proposes an energy-latency attack framework based on inducing repetitive generation; Why does weak-OOD help, which further advances the understanding of OOD-based VLM jailbreak methods; Bot Meets Shortcut, which proposes LLM-based mitigation strategies to tackle the challenge of shortcut learning; and Self-HarmLLM, which explores how a model's own output can become a new attack vector.
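To make the NINJA-style construction concrete, the sketch below illustrates the prompt-assembly step as described in the summary above: benign content is generated by the model itself and appended after the harmful user goal, pushing the request into a long context. This is a minimal, hypothetical illustration; the function names, the `model_generate` callable, and the filler topic are assumptions for demonstration and do not reproduce the paper's actual pipeline.

```python
# Conceptual sketch of a NINJA-style long-context prompt construction.
# All names here are illustrative; swap `model_generate` for a real
# text-generation call against the model under evaluation.

def generate_benign_filler(model_generate, topic: str, n_paragraphs: int = 20) -> str:
    """Ask the target model for benign content on a neutral topic.

    `model_generate` is a hypothetical callable (prompt -> text), e.g. a thin
    wrapper around whatever chat or completion API is being tested.
    """
    prompt = f"Write {n_paragraphs} detailed, factual paragraphs about {topic}."
    return model_generate(prompt)


def build_long_context_prompt(user_goal: str, benign_filler: str) -> str:
    """Append the benign, model-generated filler after the user goal."""
    return f"{user_goal}\n\n{benign_filler}"


if __name__ == "__main__":
    # Stub generator stands in for a real model call in this example.
    stub_generate = lambda p: "A benign paragraph about home gardening. " * 50
    filler = generate_benign_filler(stub_generate, topic="home gardening")
    attack_prompt = build_long_context_prompt("<harmful user goal>", filler)
    print(len(attack_prompt), "characters in the constructed prompt")
```

The point of the sketch is only the structure of the input (harmful goal followed by benign, self-generated padding); whether and how this affects alignment behavior is the subject of the NINJA paper itself.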