The field of large language model (LLM) agents is advancing rapidly, with a focus on improving their ability to learn and adapt in complex environments. One key area of research is the development of methods for learning instance-level context, which enables LLM agents to make decisions based on precise and persistent facts. Another important direction is the use of advisor models, which can steer black-box LLMs to improve their performance on specific tasks. Researchers are also exploring just-in-time episodic feedback, agentic context engineering, and summarization-based context management to improve the efficiency and effectiveness of LLM agents. Notably, recent papers have introduced approaches such as distilling offline knowledge to improve LLM agent adaptation, evolving contexts for self-improving language models, and learning to use computers from online videos.

Noteworthy papers include the following. Beyond Manuals and Tasks introduces a task-agnostic method for instance-level context learning. How to Train Your Advisor shows that advisor models can outperform static prompt optimizers across multiple domains. Just-in-time Episodic Feedback Hinter presents a system that distills offline traces into compact, context-aware hints. Agentic Context Engineering introduces a framework that treats contexts as evolving playbooks that accumulate, refine, and organize strategies. Watch and Learn converts human demonstration videos into executable UI trajectories at scale. Agent-in-the-Loop implements a continuous data flywheel for iteratively improving an LLM-based customer support system. Scaling LLM Multi-turn RL with End-to-end Summarization-based Context Management incorporates summarization-based context management into multi-turn RL training.
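
To make the summarization-based context management idea more concrete, the sketch below shows one plausible way an agent loop might compress older turns into a running summary once the dialogue exceeds a token budget. This is an illustrative assumption, not the implementation from the paper above; the `call_llm` callable, the `count_tokens` helper, and the message format are all hypothetical stand-ins.

```python
# Minimal sketch of summarization-based context management for a multi-turn
# agent loop. Illustrative only; `call_llm` and the token counter are
# hypothetical placeholders, not an API from any of the cited papers.

from typing import Callable

Message = dict  # e.g. {"role": "user", "content": "..."}


def count_tokens(messages: list[Message]) -> int:
    # Crude whitespace count as a stand-in; swap in a real tokenizer.
    return sum(len(m["content"].split()) for m in messages)


def manage_context(
    history: list[Message],
    call_llm: Callable[[list[Message]], str],
    budget: int = 2048,
    keep_recent: int = 4,
) -> list[Message]:
    """If the dialogue exceeds `budget` tokens, replace older turns with a
    single LLM-written summary and keep only the most recent turns."""
    if count_tokens(history) <= budget:
        return history
    old, recent = history[:-keep_recent], history[-keep_recent:]
    summary = call_llm(
        [{"role": "system",
          "content": "Summarize the following interaction, preserving "
                     "task state, constraints, and open subgoals."},
         *old]
    )
    return [
        {"role": "system", "content": f"Summary of earlier turns: {summary}"},
        *recent,
    ]
```

In this sketch the summary itself becomes part of the context seen by the policy on later turns, which is what allows context length to stay bounded across long multi-turn rollouts.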