Research on large language models and pretraining is advancing along several complementary lines: synthetic data generation, curriculum learning, and dynamic vocabulary selection. These techniques target both the quality and the efficiency of pretraining, with downstream benefits for applications such as text generation, language understanding, and ordinal classification. Noteworthy papers include BeyondWeb, a synthetic data generation framework that outperforms state-of-the-art synthetic pretraining datasets, and Nemotron-CC-Math, a high-quality mathematical corpus built from Common Crawl with a novel extraction pipeline. Also notable are VocabTailor, a dynamic vocabulary selection framework for small language models, and Influence-driven Curriculum Learning, which investigates training-data influence as a difficulty metric for ordering training examples. Hedged sketches of the latter two ideas follow.
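
The summary above does not describe VocabTailor's actual mechanism, so the following is only a minimal sketch of dynamic vocabulary selection in general: restrict the LM head to tokens that appear in the input plus a static pool of frequent tokens, so that a small model materializes only the output-embedding rows it needs. The function names, the candidate heuristic, and the assumption that token ids are frequency-sorted are all illustrative, not VocabTailor's API.

```python
# Sketch of dynamic vocabulary selection for a small LM head (illustrative only).
import torch

def select_vocab(input_ids: torch.Tensor, num_frequent: int) -> torch.Tensor:
    """Candidate vocabulary = tokens seen in the input plus the most frequent
    token ids (assumed here to be ids 0..num_frequent-1 after frequency sorting)."""
    frequent = torch.arange(num_frequent)
    return torch.unique(torch.cat([input_ids.flatten(), frequent]))  # (V_sel,)

def lm_head_logits(hidden: torch.Tensor, full_head: torch.Tensor,
                   candidates: torch.Tensor):
    """Compute logits only over the selected rows of the output embedding.

    hidden:    (batch, d_model) final hidden states
    full_head: (vocab_size, d_model) full output embedding
    returns:   (batch, V_sel) logits plus the mapping back to full-vocab ids
    """
    sub_head = full_head[candidates]   # gather only the rows we need
    return hidden @ sub_head.T, candidates

# Usage: decode with the reduced head; map the argmax back to full-vocab ids.
vocab_size, d_model = 32_000, 256
full_head = torch.randn(vocab_size, d_model)
input_ids = torch.randint(0, vocab_size, (1, 16))
hidden = torch.randn(1, d_model)

cands = select_vocab(input_ids, num_frequent=1_000)
logits, ids = lm_head_logits(hidden, full_head, cands)
next_token = ids[logits.argmax(dim=-1)]
```

The design choice here is that the selected subset is typically far smaller than the full vocabulary, so the matmul and the memory traffic for the output embedding shrink proportionally.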
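Likewise, the summary does not specify how training-data influence is computed. The sketch below uses a crude self-influence stand-in, per-example loss under a model snapshot, and orders training easy-to-hard; the difficulty proxy and function names are placeholders, not the paper's method.

```python
# Sketch of influence-as-difficulty curriculum ordering (proxy is a stand-in).
import torch

def difficulty_scores(model, dataset, loss_fn) -> torch.Tensor:
    """Score each example with a cheap self-influence proxy: its loss under a
    frozen snapshot of the model. Higher loss ~ higher difficulty."""
    model.eval()
    scores = []
    with torch.no_grad():
        for x, y in dataset:
            scores.append(loss_fn(model(x), y).item())
    return torch.tensor(scores)

def curriculum_order(scores: torch.Tensor) -> torch.Tensor:
    """Easy-to-hard curriculum: visit examples in ascending difficulty."""
    return torch.argsort(scores)

# Training then iterates the dataset in curriculum order, e.g.:
#   for idx in curriculum_order(difficulty_scores(model, data, loss_fn)):
#       train_step(model, data[idx])
```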