The field of large language models is moving toward more efficient and effective reasoning. Recent research has focused on mitigating overthinking, a common failure mode in which models generate excessively long and redundant reasoning chains. Proposed remedies include dynamic compression, conditional token selection, and manifold steering, all of which aim to reduce computational overhead while preserving, and in some cases improving, accuracy. Several papers introduce frameworks that let models adaptively control their reasoning depth and produce more concise reasoning paths, such as Auto Long-Short Reasoning and State Machine Reasoning. Other noteworthy work includes Amplify Adjacent Token Differences, which proposes a novel approach to mitigating Cyclical Reasoning; TrimR, which introduces a verifier-based framework for dynamic CoT compression; and ConciseRL, whose conciseness-guided reinforcement learning framework steers models toward reasoning traces that are both correct and concise.
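To make the conciseness-guided reinforcement learning idea concrete, here is a minimal sketch of a length-penalized reward of the kind such frameworks optimize. The function name, the token budget, and the penalty weight are illustrative assumptions, not ConciseRL's actual formulation:

```python
def conciseness_reward(is_correct: bool, num_tokens: int,
                       budget: int = 256, alpha: float = 0.5) -> float:
    """Hypothetical reward: correct answers earn full reward within the
    token budget, with a linear penalty (floored at 0.1) for overflow."""
    if not is_correct:
        return 0.0  # wrong answers get no reward, regardless of length
    overflow = max(0, num_tokens - budget)
    return max(0.1, 1.0 - alpha * overflow / budget)
```

Under a reward like this, a policy trained with any standard RL objective is pushed to keep reasoning traces short whenever that does not cost correctness, since longer correct traces earn strictly less reward past the budget.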