The field of large language models (LLMs) is moving towards more efficient training and inference methods, with a focus on pipeline parallelism, adaptive parallelism, and memory efficiency. Researchers are exploring techniques to mitigate pipeline bubbles, reduce communication overhead, and optimize resource utilization. Notably, approaches such as heterogeneous pipeline design, attention parallel partition, and zero-communication-overhead sequence parallelism are being proposed to improve throughput, scalability, and cost-efficiency. These advances could enable the training of larger and more complex models while reducing the cost and environmental impact of large-scale AI research. Particularly noteworthy papers include SiPipe, which achieves up to 2.1x higher throughput and 43% lower per-token latency, and ZeCO, which eliminates sequence-parallel communication overhead and achieves a 60% speedup over the current state-of-the-art sequence parallelism method.
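
To make the notion of a pipeline bubble concrete, the sketch below computes the idle-time fraction of a simple synchronous (GPipe-style) pipeline schedule using the standard (p - 1) / (m + p - 1) formula, where p is the number of pipeline stages and m the number of microbatches. This is a generic illustration of why researchers try to shrink bubbles, not the scheduling scheme of any specific paper mentioned above.

```python
def pipeline_bubble_fraction(num_stages: int, num_microbatches: int) -> float:
    """Idle ("bubble") fraction of a synchronous GPipe-style pipeline schedule.

    With p stages and m microbatches, each stage sits idle while the pipeline
    fills and drains, giving a bubble fraction of (p - 1) / (m + p - 1).
    """
    p, m = num_stages, num_microbatches
    return (p - 1) / (m + p - 1)


if __name__ == "__main__":
    # More microbatches amortize the fill/drain cost and shrink the bubble,
    # at the price of extra activation memory and scheduling complexity.
    for m in (4, 16, 64):
        frac = pipeline_bubble_fraction(num_stages=8, num_microbatches=m)
        print(f"8 stages, {m:2d} microbatches -> bubble fraction {frac:.2%}")
```

Running this shows the bubble dropping from roughly 64% at 4 microbatches to under 10% at 64, which is the basic trade-off that more sophisticated schedules and heterogeneous pipeline designs aim to improve upon.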