The field of large language model reasoning is moving toward more efficient and effective chain-of-thought methods. Researchers are exploring new paradigms, such as streaming thinking, self-exploring deep reasoning, and deep self-evolving reasoning, to improve the performance of large language models on complex reasoning tasks. These innovations aim to address limitations such as unnecessary latency, overthinking, and underthinking, and to enable models to reason in a manner closer to human thinking. Notable papers in this area include:

- StreamingThinker, which enables large language models to think while reading, reducing token waiting time and latency.
- SEER, which introduces a self-exploring deep reasoning framework for code generation that explores diverse reasoning paths and assesses the quality of intermediate steps.
- Deep Self-Evolving Reasoning, which demonstrates that even weak verification and refinement capabilities can be substantially extended through a probabilistic paradigm.
- SmartSwitch, which addresses underthinking by promoting deeper exploration of promising lines of thought.
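To make the probabilistic paradigm mentioned for Deep Self-Evolving Reasoning concrete, here is a minimal, hypothetical sketch (not the paper's actual algorithm): a refine-and-verify loop in which a weak, noisy verifier is queried several times per round, and the candidate answer is accepted only when a majority of those independent judgments agree. The `weak_verify` and `refine` functions, their success probabilities, and the toy integer task are all illustrative assumptions.

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

def weak_verify(answer, target, accuracy=0.7):
    """Hypothetical weak verifier: returns the correct judgment
    only with probability `accuracy`, otherwise flips it."""
    correct = (answer == target)
    return correct if random.random() < accuracy else not correct

def refine(answer, target):
    """Hypothetical weak refiner: moves the answer one step toward
    the target with probability 0.6, otherwise leaves it unchanged."""
    if answer != target and random.random() < 0.6:
        return answer + (1 if target > answer else -1)
    return answer

def deep_self_evolving_reasoning(initial, target, rounds=50, votes=5):
    """Iteratively refine a candidate answer; accept it only when a
    majority of independent weak verifications judge it correct.
    Aggregating several noisy judgments makes acceptance far more
    reliable than any single weak verification."""
    answer = initial
    for _ in range(rounds):
        approvals = sum(weak_verify(answer, target) for _ in range(votes))
        if approvals > votes // 2:
            return answer
        answer = refine(answer, target)
    return answer

print(deep_self_evolving_reasoning(0, 5))
```

The key design point this sketch illustrates is that neither the verifier nor the refiner needs to be strong individually: repeated sampling and majority voting extend weak capabilities, which is the spirit of the probabilistic paradigm described above.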