The field of high-performance computing and large language models is seeing significant advances, driven by the need for efficient and scalable solutions. Researchers are exploring innovative approaches to mitigate workload interference, predict application runtime, and improve GPU utilization. Notably, surrogate models, hybrid simulation techniques, and optimized attention mechanisms are enabling more accurate and efficient processing of complex workloads. Furthermore, advances in load balancing, task-data orchestration, and hierarchical scheduling are enhancing the performance and scalability of distributed systems.

Noteworthy papers include:

- SMART presents a surrogate model for predicting application runtime in Dragonfly systems, outperforming existing baselines.
- Optimizing Mixture of Block Attention introduces FlashMoBA, a hardware-aware CUDA kernel for efficient MoBA execution, achieving up to a 14.7x speedup over FlashAttention-2 (a minimal sketch of the block-selection idea follows this list).
- Harli improves GPU utilization by co-locating parameter-efficient finetuning tasks with LLM decode instances, increasing finetuning throughput by 46.2% on average.
- TD-Orch presents a scalable load-balancing framework for distributed systems, achieving up to a 2.7x speedup over existing baselines.
- FuseSampleAgg fuses neighbor sampling and mean aggregation for mini-batch GNNs, reducing memory traffic and overhead, with step-time speedups of up to 51x (see the second sketch below).
- Hyperion proposes a hierarchical two-stage framework for parallel LLM acceleration in multi-tier networks, reducing end-to-end latency by up to 52.1%.
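To make the Mixture of Block Attention (MoBA) idea concrete, here is a minimal PyTorch sketch: each query scores the mean-pooled key of every block and attends only to its top-k blocks, so most key/value blocks are skipped. This is a naive reference for the mechanism, not the FlashMoBA kernel; the function name, shapes, and defaults are illustrative assumptions, and causal masking is omitted.

```python
import torch
import torch.nn.functional as F

def moba_attention(q, k, v, block_size=64, top_k=2):
    # q, k, v: (seq_len, d). Assumes seq_len is divisible by block_size.
    n, d = k.shape
    n_blocks = n // block_size
    k_blocks = k.view(n_blocks, block_size, d)
    v_blocks = v.view(n_blocks, block_size, d)
    # Gate: score each query against the mean-pooled key of each block.
    block_means = k_blocks.mean(dim=1)             # (n_blocks, d)
    gate = q @ block_means.T                       # (seq_len, n_blocks)
    top = gate.topk(top_k, dim=-1).indices         # (seq_len, top_k)
    out = torch.empty_like(q)
    for i in range(n):
        ks = k_blocks[top[i]].reshape(-1, d)       # keys of selected blocks
        vs = v_blocks[top[i]].reshape(-1, d)
        attn = F.softmax((q[i] @ ks.T) / d**0.5, dim=-1)
        out[i] = attn @ vs
    return out

# Usage: tiny random example.
q = torch.randn(128, 32); k = torch.randn(128, 32); v = torch.randn(128, 32)
print(moba_attention(q, k, v, block_size=32, top_k=2).shape)  # torch.Size([128, 32])
```

The per-query Python loop is where a real implementation pays the cost; FlashMoBA's contribution is fusing this selection and the block-sparse attention into a single hardware-aware CUDA kernel.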
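Similarly, FuseSampleAgg's savings come from combining the sampling of neighbor features with the mean reduction, avoiding a separate materialized (batch, fanout, d) gather buffer. The sketch below illustrates that fusion over a CSR adjacency in plain Python; the function name, signature, and CSR layout are assumptions for illustration, not the paper's actual GPU operator.

```python
import torch

def fused_sample_mean(x, indptr, indices, seeds, fanout, gen=None):
    # x: (num_nodes, d) features; (indptr, indices): CSR adjacency (LongTensors).
    # For each seed, sample up to `fanout` neighbors and write their feature
    # mean straight into the output row.
    out = torch.zeros(len(seeds), x.shape[1], dtype=x.dtype)
    for i, u in enumerate(seeds):
        start, end = indptr[u].item(), indptr[u + 1].item()
        deg = end - start
        if deg == 0:
            continue  # isolated node: leave a zero row
        take = min(fanout, deg)
        pick = torch.randperm(deg, generator=gen)[:take] + start
        # Sample then mean; a truly fused kernel would accumulate the sum
        # on the fly without materializing this (take, d) slice.
        out[i] = x[indices[pick]].mean(dim=0)
    return out

# Usage: a 4-node cycle graph with 3-dim features, fanout 2.
indptr = torch.tensor([0, 2, 4, 6, 8])
indices = torch.tensor([1, 3, 0, 2, 1, 3, 0, 2])
x = torch.randn(4, 3)
print(fused_sample_mean(x, indptr, indices, seeds=torch.tensor([0, 2]), fanout=2))
```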