The field of AI research is moving towards more secure and decentralized systems. Recent work has focused on evaluating the security of the backbone language models that power AI agents, with an emphasis on systematically identifying and categorizing security risks. There is also growing interest in decentralized AI systems, which aim to democratize access to high-quality inference through collective intelligence without sacrificing reliability or security. Noteworthy papers in this area include Breaking Agent Backbones, which introduces a framework for evaluating the security of backbone language models, and Fortytwo, which presents a novel protocol for swarm inference with peer-ranked consensus. Other notable papers include AgentCyTE, which leverages agentic AI to generate cybersecurity training and experimentation scenarios, and SIRAJ, a generic red-teaming framework for arbitrary black-box LLM agents. Together, these advances could substantially improve both the security and the efficiency of AI systems.
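
To make the idea of peer-ranked consensus concrete, the sketch below shows one minimal interpretation: each node in a swarm proposes an answer, every node ranks all proposals, and the proposal with the best aggregate rank wins. This is not the Fortytwo protocol itself, whose details are not given here; the names (`SwarmNode`, `peer_ranked_consensus`), the random stand-in ranking, and the Borda-count aggregation are all illustrative assumptions.

```python
import random
from collections import defaultdict

# Illustrative sketch of peer-ranked consensus for swarm inference.
# NOT the Fortytwo protocol: node behavior, ranking, and aggregation
# (a Borda count) are assumptions chosen for demonstration only.

class SwarmNode:
    def __init__(self, name):
        self.name = name

    def propose(self, query):
        # Stand-in for a local model's inference on the query.
        return f"{self.name}'s answer to {query!r}"

    def rank(self, proposals):
        # Stand-in for scoring peers' answers; here, a random ordering.
        return sorted(proposals, key=lambda _: random.random())

def peer_ranked_consensus(nodes, query):
    """Each node proposes an answer; every node ranks all proposals;
    the proposal with the highest aggregate (Borda) score wins."""
    proposals = [node.propose(query) for node in nodes]
    scores = defaultdict(int)
    for node in nodes:
        ranking = node.rank(proposals)
        # Borda count: the top-ranked proposal receives the most points.
        for points, proposal in enumerate(reversed(ranking)):
            scores[proposal] += points
    return max(proposals, key=lambda p: scores[p])

if __name__ == "__main__":
    swarm = [SwarmNode(f"node{i}") for i in range(5)]
    print(peer_ranked_consensus(swarm, "What is 2 + 2?"))
```

The appeal of this style of aggregation is that no single node's ranking is decisive, which is the general intuition behind achieving reliable decentralized inference without a trusted central ranker.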