The field of large language models (LLMs) is evolving rapidly, with particular focus on knowledge editing, memory management, and reasoning. Recent work has produced more efficient methods for updating factual knowledge in LLMs, for example by balancing knowledge updates across different modules rather than concentrating them in one. New memory management systems have also been proposed that let personalized LLM agents maintain dynamically updated memory vectors, enabling services tailored to individual users. In parallel, research has examined chain-of-thought reasoning in LLMs, analyzing what it can memorize and where it falls short.

Noteworthy papers include: Balancing Knowledge Updates: Toward Unified Modular Editing in LLMs, which extends the associative memory paradigm to jointly update both MLP and Attn modules; and ExplicitLM: Decoupling Knowledge from Parameters via Explicit Memory Banks, which introduces an architecture with a million-scale external memory bank storing human-readable knowledge as token sequences.