Research on large language models is increasingly focused on efficient reasoning: reducing the computational cost of long chains of thought while preserving, or improving, accuracy. One key direction is adaptive reasoning, in which the model adjusts its thinking length to the difficulty of the problem at hand. Another is distillation, which transfers reasoning capability to smaller models so they can be deployed more cheaply and at scale.

Noteworthy papers include Adaptive Effort Control, which enables fine-grained control over how much thinking is spent on a given query, and DeepCompress, which uses a dual-reward strategy to improve both the accuracy and the efficiency of large reasoning models. DART proposes a difficulty-adaptive reasoning truncation framework, and BARD introduces a budget-aware reasoning distillation method. Together, these advances promise to make large language models both more accurate and more economical to run in real-world applications.
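To make the accuracy-versus-efficiency trade-off concrete, the sketch below shows one way a dual-reward signal of this kind could be computed, combining a correctness reward with a length-based term tied to a token budget. The function name, weights, and budget values are illustrative assumptions for exposition, not the formulation used by DeepCompress or any of the other papers above.

```python
# Hypothetical sketch of a dual-reward signal that trades off answer accuracy
# against reasoning length. Weights and budget are assumed values, not taken
# from any specific paper.

def dual_reward(is_correct: bool, num_reasoning_tokens: int,
                token_budget: int = 1024, brevity_weight: float = 0.2) -> float:
    """Return a scalar reward combining correctness and reasoning efficiency.

    - Correct answers earn a base reward of 1.0, incorrect answers 0.0.
    - A brevity term (scaled by `brevity_weight`) is positive when the
      reasoning stays under `token_budget` and negative when it exceeds it.
    """
    accuracy_reward = 1.0 if is_correct else 0.0
    # Positive when under budget, negative when over budget.
    brevity_term = brevity_weight * (1.0 - num_reasoning_tokens / token_budget)
    return accuracy_reward + brevity_term


# Example: a correct answer using 400 of 1024 budgeted tokens scores higher
# than an equally correct answer that used 2000 tokens.
print(dual_reward(True, 400))    # 1.0 + 0.2 * (1 - 400/1024)  ~= 1.122
print(dual_reward(True, 2000))   # 1.0 + 0.2 * (1 - 2000/1024) ~= 0.809
```

A reward shaped this way pushes a policy toward shorter reasoning only when correctness is maintained, which is the general intuition behind budget-aware and difficulty-adaptive methods such as those surveyed above.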