The field of Large Language Models (LLMs) is moving toward more efficient and controllable reasoning. Researchers are exploring methods to mitigate overthinking, reduce token consumption, and improve accuracy. One notable direction is budget-aware reasoning, which enables precise control over the length of the model's thought process. Another is adaptive allocation of reasoning resources, letting models give concise answers to simple questions while retaining sufficient reasoning depth for harder ones. Noteworthy papers include BudgetThinker, which introduces a framework for empowering LLMs with budget-aware reasoning; DRQA, which proposes dynamic reasoning quota allocation to control overthinking in reasoning LLMs (RLLMs); and ThinkDial, which presents an open-recipe, end-to-end framework for controllable reasoning through discrete operational modes.
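As a rough illustration of how a budget on the thinking segment could be enforced at decode time, the sketch below caps the number of reasoning tokens and forces the segment to close once the budget is spent. The control markers, the `generate_next_token` callable, and the two-phase loop are assumptions made for this example; they are not the specific mechanisms used by BudgetThinker, DRQA, or ThinkDial.

```python
# Minimal sketch (illustrative only): decode-time enforcement of a reasoning-token budget.
# `generate_next_token` is a hypothetical stand-in for whatever decoding call the serving
# stack exposes; the markers below are assumed, not taken from any of the cited papers.

THINK_END = "</think>"   # assumed marker that closes the thinking segment
ANSWER_END = "<eos>"     # assumed end-of-sequence marker


def generate_with_budget(prompt, generate_next_token,
                         think_budget=256, max_answer_tokens=128):
    """Generate a response while capping the length of the thinking segment."""
    tokens = []

    # Phase 1: let the model think, but stop once the budget is exhausted.
    for _ in range(think_budget):
        tok = generate_next_token(prompt, tokens)
        tokens.append(tok)
        if tok == THINK_END:          # model closed its reasoning on its own
            break
    else:
        tokens.append(THINK_END)      # budget exhausted: force the reasoning to close

    # Phase 2: decode the final answer under a separate, smaller cap.
    for _ in range(max_answer_tokens):
        tok = generate_next_token(prompt, tokens)
        tokens.append(tok)
        if tok == ANSWER_END:
            break

    return tokens
```

Hard truncation like this is the bluntest form of budget control; training-time approaches instead teach the model to pace its own reasoning against a stated budget, which is closer to what the papers above pursue.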