The field of high-performance computing (HPC) is witnessing significant developments in communication protocols and frameworks, driven by the increasing complexity of HPC architectures and the growing adoption of irregular scientific algorithms. Researchers are focusing on improving the performance and scalability of collective operations, which are crucial for both HPC applications and large-scale AI training and inference. Novel frameworks and extensions are being introduced to simplify the benchmarking of collective operations and to support asynchronous, multithreaded communication, and these innovations are expected to enhance the overall efficiency and productivity of HPC systems. Noteworthy papers in this area include PICO, which presents a lightweight and extensible framework for benchmarking collective operations; ClusterFusion, which introduces cluster-level communication primitives to expand the scope of operator fusion for large language model inference; and Examining MPI and its Extensions for Asynchronous Multithreaded Communication, which provides a comprehensive evaluation of how well current MPI extensions support asynchronous, multithreaded communication.
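
To make the last point concrete, the sketch below shows the basic pattern such MPI extensions build on: a nonblocking collective started in a program initialized with full thread support and overlapped with independent work. This is a minimal illustration assuming only a standard MPI-3 installation; it is not drawn from any of the cited papers.

```c
/* Minimal sketch (not from the cited papers): overlapping a nonblocking
 * MPI collective with local work in a multithreaded-capable MPI program. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided, rank;

    /* Request full thread support; production codes should check `provided`. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = (double)rank, global = 0.0;
    MPI_Request req;

    /* Start the collective without blocking the calling thread. */
    MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                   MPI_COMM_WORLD, &req);

    /* ... independent computation may proceed here, possibly on other threads ... */

    /* Complete the collective before using its result. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);
    if (rank == 0)
        printf("sum of ranks = %f\n", global);

    MPI_Finalize();
    return 0;
}
```

The extensions evaluated in the MPI paper aim to make exactly this kind of overlap more efficient and easier to express when many threads issue communication concurrently.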