The field of large language models is moving toward more efficient training methods, with a focus on distributed training, parallelism strategies, and optimization techniques. Researchers are exploring how hardware, system topology, and model execution interact in order to improve scalability and reliability. Notable papers in this area include:

- "Investigating the efficiency of distributed training from a power, performance, and thermal perspective", which provides insights into the complex interactions between hardware and system topology.
- "Pre-training under infinite compute", which shows that simple algorithmic improvements can enable significantly more data-efficient pre-training in a compute-rich future.
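
As a concrete reference point for what "distributed training" and "parallelism strategies" mean in practice, the sketch below illustrates plain data parallelism: every worker holds a full copy of the model, computes gradients on its own data shard, and the gradients are averaged across workers with an all-reduce before each optimizer step. This is a minimal, illustrative example and is not drawn from either paper above; it assumes PyTorch's `torch.distributed` API with the CPU `gloo` backend, and the model, data, and hyperparameters are made up for the demonstration.

```python
# Minimal data-parallel training sketch (illustrative only): gradients are
# averaged across ranks with all_reduce, the core pattern that libraries like
# DistributedDataParallel automate. Assumes PyTorch with the "gloo" backend.
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp


def worker(rank: int, world_size: int) -> None:
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    torch.manual_seed(0)                 # identical initial weights on every rank
    model = torch.nn.Linear(8, 1)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    torch.manual_seed(rank)              # each rank sees a different data shard
    x, y = torch.randn(16, 8), torch.randn(16, 1)

    for _ in range(10):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(model(x), y)
        loss.backward()
        # Average gradients across ranks: the defining step of data parallelism.
        for p in model.parameters():
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
            p.grad /= world_size
        opt.step()

    if rank == 0:
        print(f"final loss on rank 0: {loss.item():.4f}")
    dist.destroy_process_group()


if __name__ == "__main__":
    world_size = 2
    mp.spawn(worker, args=(world_size,), nprocs=world_size)
```

The explicit all-reduce loop is what communication-efficiency and topology-aware work optimizes: in real systems this step is overlapped with the backward pass and scheduled with awareness of the interconnect, which is where the hardware and system-topology interactions discussed above come into play.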