The field of language models is moving toward closing the reasoning gap between closed-source and open-source models. Researchers are exploring methods for transferring knowledge from powerful closed-source models to open-source ones, enabling the latter to perform complex reasoning tasks. One key direction is knowledge distillation, in which a smaller student model learns from a larger, more capable teacher (a minimal loss sketch appears after the paper list below). Another is reward-guided dataset distillation, which improves the performance of smaller models on mathematical and complex reasoning tasks (see the selection sketch below). A third line of work uses intermediate-sized models as teacher assistants to bridge the capacity and reasoning-length gaps in small language models.

Notable papers include:

- ReasonBridge: introduces a hierarchical knowledge distillation framework that improves the reasoning capabilities of open-source models by up to 23% on benchmark tasks.
- AdvDistill: proposes a reward-guided dataset distillation framework that significantly improves student-model performance on mathematical and complex reasoning tasks.
- MiCoTA: employs intermediate-sized models as teacher assistants to close the capacity and reasoning-length gaps described above, achieving significant improvements in small-model reasoning performance.
- NaturalThoughts: curates high-quality reasoning traces from a strong teacher model, outperforming existing reasoning datasets on general STEM reasoning benchmarks.
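As a concrete reference point, classic soft-label knowledge distillation (the Hinton-style objective that much LLM distillation work builds on) combines a temperature-softened KL term against the teacher's distribution with ordinary cross-entropy on the gold labels. The sketch below is a minimal PyTorch version under those standard assumptions; it is not the objective of any particular paper above, and `temperature` and `alpha` are conventional hyperparameters rather than values reported by the papers.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Classic soft-label distillation: KL(student || teacher) at a
    softened temperature, blended with cross-entropy on hard labels.

    Assumes classification-style logits of shape (batch, num_classes);
    for sequence models the logits would be flattened over time first.
    """
    # Soften both distributions with the temperature, then match them.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean")
    # The T^2 factor keeps gradient magnitudes comparable as T grows,
    # a standard detail from the original distillation formulation.
    kd = kd * temperature ** 2

    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```

Teacher-assistant approaches such as MiCoTA can be read as applying this same kind of objective in stages, large teacher to intermediate model, then intermediate model to small student.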
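Reward-guided dataset distillation, broadly construed, means using a reward signal to decide which teacher generations become student training data. The sketch below is a generic, hypothetical version of that selection loop, not AdvDistill's actual pipeline: `teacher_generate`, `reward_fn`, `num_samples`, and `keep_top` are illustrative placeholders standing in for a teacher-model sampler and for a verifier, answer checker, or learned reward model.

```python
from typing import Callable, List

def reward_guided_distill(prompts: List[str],
                          teacher_generate: Callable[[str, int], List[str]],
                          reward_fn: Callable[[str, str], float],
                          num_samples: int = 8,
                          keep_top: int = 1) -> List[dict]:
    """Build a student fine-tuning set by sampling several teacher
    generations per prompt and keeping only the highest-reward ones."""
    dataset = []
    for prompt in prompts:
        # Sample multiple candidate reasoning traces from the teacher.
        candidates = teacher_generate(prompt, num_samples)
        # Rank candidates by the reward signal, best first.
        scored = sorted(candidates,
                        key=lambda c: reward_fn(prompt, c),
                        reverse=True)
        # Retain the top-scoring traces as student training examples.
        for completion in scored[:keep_top]:
            dataset.append({"prompt": prompt, "completion": completion})
    return dataset
```

The resulting prompt/completion pairs would then be used for ordinary supervised fine-tuning of the student, so the reward model shapes the data rather than the training loss itself.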