Artificial intelligence research is converging on systems that are both more autonomous and more secure. One line of work builds world models that learn from interaction with the real world and develop physical intuition, a prerequisite for genuine autonomy. A second develops access control frameworks for computer-use agents that can block malicious actions and protect users from potential threats; a minimal sketch of this idea appears after the paper list below. A third examines the blind goal-directedness of computer-use agents, their tendency to pursue a stated goal regardless of context, which can lead to unintended consequences. Together, these efforts point towards AI systems that interact with the world more robustly, securely, and autonomously.

Noteworthy papers include:

- WoW, a world model trained on robot interaction trajectories that achieves state-of-the-art performance in physical consistency and causal reasoning.
- STAC, a novel multi-turn attack framework that exploits agent tool use and highlights the need for stronger training- or inference-time interventions.
- ExoPredicator, a framework for abstract world models that enables fast planning and generalization to held-out tasks.
- BLIND-ACT, which characterizes blind goal-directedness in computer-use agents and establishes a foundation for future research on studying and mitigating this fundamental risk.
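To make the access-control idea concrete, here is a minimal sketch of a policy-gated tool call for a computer-use agent: every tool invocation is checked against a default-deny policy, and sensitive tools escalate to the user before executing. All names here (`Policy`, `ToolCall`, `Decision`) are illustrative assumptions for this sketch, not APIs from any of the papers above.

```python
# Minimal sketch: default-deny access control for a computer-use agent.
# Names and policy contents are illustrative, not from the cited papers.
from dataclasses import dataclass, field
from enum import Enum, auto


class Decision(Enum):
    ALLOW = auto()
    DENY = auto()
    ASK_USER = auto()  # escalate to the human for confirmation


@dataclass
class ToolCall:
    tool: str   # e.g. "browser", "shell", "file_write"
    args: dict


@dataclass
class Policy:
    # Tools the agent may invoke without confirmation.
    allowed: set = field(default_factory=lambda: {"browser", "file_read"})
    # Tools that always require explicit user approval.
    confirm: set = field(default_factory=lambda: {"shell", "file_write"})

    def evaluate(self, call: ToolCall) -> Decision:
        if call.tool in self.allowed:
            return Decision.ALLOW
        if call.tool in self.confirm:
            return Decision.ASK_USER
        return Decision.DENY  # default-deny anything unlisted


def execute(call: ToolCall, policy: Policy) -> str:
    """Run a tool call only if the policy permits it."""
    decision = policy.evaluate(call)
    if decision is Decision.DENY:
        return f"blocked: {call.tool}"
    if decision is Decision.ASK_USER:
        return f"pending user approval: {call.tool}"
    return f"executed: {call.tool}"


if __name__ == "__main__":
    policy = Policy()
    print(execute(ToolCall("browser", {"url": "https://example.com"}), policy))
    print(execute(ToolCall("shell", {"cmd": "rm -rf /tmp/x"}), policy))
    print(execute(ToolCall("keylogger", {}), policy))
```

The default-deny posture and the explicit user-approval path illustrate the kind of inference-time intervention that work such as STAC motivates: the agent's tool use is mediated by a policy rather than trusted unconditionally.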