The field of reasoning language models is moving toward more capable and efficient models for complex tasks such as mathematical reasoning, tool use, and source summarization. Recent work has focused on improving the performance of small language models, which are more practical to deploy on constrained infrastructure. Techniques such as mid-training on synthetic datasets, supervised fine-tuning, and reinforcement learning have proven effective at unlocking strong reasoning capabilities in small models. There is also growing interest in models that natively support citation and grounding with literal quotes, and that integrate multiple features of reasoning workflows.

Noteworthy papers:

- Even Small Reasoners Should Quote Their Sources introduces the Pleias-RAG model family, a new generation of small reasoning models with native support for citation and grounding via literal quotes.
- Phi-4-Mini-Reasoning presents a systematic training recipe for small language models that achieves state-of-the-art results on math reasoning tasks.
- Nemotron-Research-Tool-N1 develops a series of tool-using language models that achieve state-of-the-art results on tool-use benchmarks.
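The grounding-with-literal-quotes idea mentioned above can be made concrete with a small sketch. This is a hypothetical illustration, not the Pleias-RAG implementation: it assumes the model wraps quoted spans in `<quote>...</quote>` tags (an invented convention) and checks that every quoted span appears verbatim in one of the retrieved sources.

```python
import re

def extract_quotes(answer: str) -> list[str]:
    # Pull out spans the model marked as literal quotes.
    # The <quote>...</quote> tagging convention is assumed for illustration.
    return re.findall(r"<quote>(.*?)</quote>", answer, flags=re.DOTALL)

def is_grounded(answer: str, sources: list[str]) -> bool:
    # An answer is grounded only if every quoted span occurs
    # verbatim in at least one retrieved source document.
    quotes = extract_quotes(answer)
    return all(any(q in src for src in sources) for q in quotes)

sources = ["Small models are feasible on constrained infrastructure."]
good = "As noted, <quote>Small models are feasible</quote> in practice."
bad = "The paper claims <quote>large models always win</quote>."

print(is_grounded(good, sources))  # True: the quote matches a source verbatim
print(is_grounded(bad, sources))   # False: the quote appears in no source
```

Verbatim substring matching is the simplest possible grounding check; real systems would also need to handle normalization (whitespace, casing) and attribute each quote to a specific source.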