Advances in Compiler Optimization and Code Generation

Research in compiler optimization and code generation is increasingly leveraging large language models (LLMs) and machine learning to improve performance and efficiency. Researchers are exploring hierarchical and hardware-aware methods for optimizing generated code, and there is growing interest in frameworks and tools that translate and optimize code across architectures and programming languages. Noteworthy papers include KernelBand, a framework that boosts LLM-based kernel optimization with a hierarchical, hardware-aware multi-armed bandit; QiMeng-Kernel, which introduces a macro-thinking micro-coding paradigm for LLM-based high-performance GPU kernel generation; VecIntrinBench, the first comprehensive benchmark for evaluating intrinsic code migration to the RISC-V Vector extension; and Rustine, a fully automated pipeline for translating large-scale C repositories to idiomatic Rust.
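To give a feel for the bandit framing behind KernelBand (the paper's actual hierarchical, hardware-aware design is not reproduced here), below is a minimal UCB1 sketch that balances exploring and exploiting candidate kernel configurations. The candidate tile sizes, the reward function, and all names are invented for illustration; the reward stands in for a noisy throughput measurement.

```python
import math
import random


def ucb1_select(counts, values, t):
    """Pick the arm with the highest UCB1 score; try unplayed arms first."""
    for i, n in enumerate(counts):
        if n == 0:
            return i
    return max(range(len(counts)),
               key=lambda i: values[i] + math.sqrt(2 * math.log(t) / counts[i]))


def tune(arms, measure, rounds=200):
    """Run a UCB1 bandit over candidate kernel configurations.

    `arms` are hypothetical configs (e.g. tile sizes); `measure(arm)`
    returns a noisy reward such as normalized throughput.
    """
    counts = [0] * len(arms)
    values = [0.0] * len(arms)  # running mean reward per arm
    for t in range(1, rounds + 1):
        i = ucb1_select(counts, values, t)
        r = measure(arms[i])
        counts[i] += 1
        values[i] += (r - values[i]) / counts[i]  # incremental mean update
    best = max(range(len(arms)), key=lambda i: values[i])
    return arms[best], counts


# Toy benchmark: tile size 64 has the highest mean reward.
random.seed(0)
means = {32: 0.5, 64: 0.9, 128: 0.7}
best, counts = tune([32, 64, 128],
                    lambda a: means[a] + random.gauss(0, 0.05))
```

After a couple hundred trials the bandit concentrates its measurement budget on the best-performing configuration, which is the core idea of bandit-based autotuning: spend expensive hardware measurements where they are most informative.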

Sources

VecIntrinBench: Benchmarking Cross-Architecture Intrinsic Code Migration for RISC-V Vector

KernelBand: Boosting LLM-based Kernel Optimization with a Hierarchical and Hardware-aware Multi-armed Bandit

QiMeng-Kernel: Macro-Thinking Micro-Coding Paradigm for LLM-Based High-Performance GPU Kernel Generation

Translating Large-Scale C Repositories to Idiomatic Rust
