The field of AI research is moving rapidly towards addressing the growing concerns of safety and security. Recent developments have highlighted the potential risks associated with the increasing capabilities of large language models and embodied agents. Researchers are working to develop more robust safety assessments and evaluation methods to mitigate these risks. A key area of focus is the development of comprehensive benchmarks for evaluating the safety of AI systems, including those that interact with real-world environments. These benchmarks aim to test the ability of AI systems to respond to hazardous instructions, avoid harmful behavior, and resist attacks. Another important area of research is the development of defense mechanisms against jailbreak attacks, which can compromise the security of AI systems. Overall, the field is advancing towards a more nuanced understanding of the safety and security risks associated with AI and the development of effective mitigation strategies. Noteworthy papers include those that propose innovative benchmarks such as AGENTSAFE and OS-Harm, which provide systematic testing of AI systems under adversarial conditions. The RAS-Eval benchmark is also notable for its comprehensive evaluation of LLM agents in real-world environments.