The field of large language models (LLMs) is moving toward more efficient long-context reasoning, with a focus on reducing computational cost while maintaining or improving performance. Researchers are exploring new paradigms, such as using distilled language models as retrieval algorithms, to achieve substantial parameter reduction and acceleration. There is also growing interest in novel attention mechanisms such as Top-k sparse attention, which restricts each query to its highest-scoring keys and can facilitate optimization-like inference (see the sketch below). On-demand expert loading, context-aware mixture-of-experts inference, and memory-augmented models are likewise being investigated to improve the efficiency and accuracy of LLMs; an illustrative prefetching sketch follows the attention example. Several papers report notable results: OD-MoE achieves 99.94% expert activation prediction accuracy and delivers approximately 75% of the decoding speed of a fully GPU-cached MoE deployment, while MemLoRA enables local deployment of memory-augmented models by equipping small language models with specialized memory adapters, outperforming baseline models that are 10x larger.
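To make the attention idea concrete, here is a minimal sketch of top-k sparse attention in PyTorch. It illustrates the general technique only (each query keeps its k largest attention scores and masks the rest); the function name, tensor shapes, and the default `top_k` value are assumptions, not the mechanism of any specific paper mentioned above.

```python
import torch
import torch.nn.functional as F

def topk_sparse_attention(q, k, v, top_k=8):
    """Illustrative top-k sparse attention: each query attends only to its
    top_k highest-scoring keys; all other attention weights are masked out.

    q, k, v: tensors of shape (batch, heads, seq_len, head_dim).
    """
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5          # (B, H, Lq, Lk)

    # Keep only the top_k scores per query; mask everything else to -inf.
    topk_vals, _ = scores.topk(top_k, dim=-1)
    threshold = topk_vals[..., -1, None]                 # k-th largest score per query
    sparse_scores = scores.masked_fill(scores < threshold, float("-inf"))

    weights = F.softmax(sparse_scores, dim=-1)           # masked keys get weight 0
    return weights @ v

# Example: one head over a short sequence.
q = torch.randn(1, 1, 16, 64)
k = torch.randn(1, 1, 16, 64)
v = torch.randn(1, 1, 16, 64)
out = topk_sparse_attention(q, k, v, top_k=4)
print(out.shape)  # torch.Size([1, 1, 16, 64])
```

In practice, sparse-attention papers differ mainly in how the kept positions are chosen and in how the sparsity is exploited for speed; the dense score computation above is the simplest possible reference, not an optimized kernel.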
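The following sketch shows, under stated assumptions, one way on-demand expert loading could be organized: experts live in host memory, and only the experts a predictor expects the next decoding step to activate are kept resident on the accelerator. It is a hypothetical illustration, not OD-MoE's actual system; the class name, method names, and eviction policy are all assumptions.

```python
import torch
import torch.nn as nn

class OnDemandExpertLoader:
    """Illustrative sketch of on-demand expert loading (not OD-MoE's system):
    all experts stay in host memory, and only the experts predicted to be
    activated next are moved onto the accelerator."""

    def __init__(self, experts, device=None):
        self.experts = nn.ModuleList(experts)  # starts on CPU
        self.device = device or ("cuda" if torch.cuda.is_available() else "cpu")
        self.on_gpu = set()

    def prefetch(self, predicted_ids):
        """Move predicted experts onto the device and evict the rest."""
        predicted = set(predicted_ids)
        for idx in list(self.on_gpu - predicted):
            self.experts[idx].to("cpu")
            self.on_gpu.discard(idx)
        for idx in predicted - self.on_gpu:
            self.experts[idx].to(self.device)
            self.on_gpu.add(idx)

    def run_expert(self, idx, x):
        """Apply one expert; load it on demand if the prediction missed."""
        if idx not in self.on_gpu:
            self.experts[idx].to(self.device)
            self.on_gpu.add(idx)
        return self.experts[idx](x.to(self.device))

# Example: 8 small FFN experts, prefetch the 2 predicted to activate next.
experts = [nn.Sequential(nn.Linear(64, 256), nn.GELU(), nn.Linear(256, 64))
           for _ in range(8)]
loader = OnDemandExpertLoader(experts)
loader.prefetch([3, 5])
y = loader.run_expert(3, torch.randn(1, 64))
```

The reported gap to a fully GPU-cached deployment (roughly 75% of its decoding speed) comes from transfer and misprediction overheads that a design like this must hide; the high expert-activation prediction accuracy is what keeps the fallback path in `run_expert` rare.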