Advances in Large Language Models for Code Generation and Security

The field of Large Language Models (LLMs) for code generation and security is evolving rapidly, with a focus on improving the effectiveness and robustness of these models. Recent work has applied LLMs to smart contract generation, code summarization, and UI synthesis with promising results. One notable trend is the development of frameworks that integrate LLMs with complementary techniques, such as finite state machines and variational prefix tuning, to improve the quality and diversity of generated code. There is also a growing emphasis on evaluating the security and robustness of LLMs, with studies probing how well they understand code semantics and resist attacks. Noteworthy papers in this area include PRIMG, which uses mutant prioritization for efficient LLM-driven test generation, and Are Large Language Models Robust in Understanding Code Against Semantics-Preserving Mutations?, which evaluates whether LLMs still understand code after behavior-preserving rewrites. Also notable are Web-Bench, a code-generation benchmark built on web standards and frameworks, and Can You Really Trust Code Copilots?, a multi-task benchmark for evaluating LLM code security.
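To make the idea of a semantics-preserving mutation concrete, the minimal Python sketch below shows an original function and a behaviorally identical mutant; a robust code LLM should describe and reason about both in the same way. The function names and the particular rewrites are illustrative assumptions, not taken from the cited papers.

```python
# Two semantically equivalent implementations: a robust code LLM should
# summarize or reason about both in the same way. Names and the specific
# rewrites here are illustrative, not drawn from the cited papers.

def sum_of_squares(values):
    # Original: straightforward accumulation over a list.
    total = 0
    for v in values:
        total += v * v
    return total

def sum_of_squares_mutated(values):
    # Semantics-preserving mutant: renamed identifiers, a while loop in
    # place of the for loop, and index-based traversal. Behavior is unchanged.
    acc, i = 0, 0
    while i < len(values):
        acc = acc + values[i] ** 2
        i += 1
    return acc

# Sanity check that the mutation preserves behavior.
assert sum_of_squares([1, 2, 3]) == sum_of_squares_mutated([1, 2, 3]) == 14
```

Benchmarks of this kind compare a model's output on the original and mutated programs; any divergence indicates reliance on surface form rather than code semantics.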

Sources

PRIMG: Efficient LLM-driven Test Generation Using Mutant Prioritization

Benchmarking and Revisiting Code Generation Assessment: A Mutation-Based Approach

Web-Bench: A LLM Code Benchmark Based on Web Standards and Frameworks

Guiding LLM-based Smart Contract Generation with Finite State Machine

Variational Prefix Tuning for Diverse and Accurate Code Summarization Using Pre-trained Language Models

UICopilot: Automating UI Synthesis via Hierarchical Code Generation from Webpage Designs

Are Large Language Models Robust in Understanding Code Against Semantics-Preserving Mutations?

Can You Really Trust Code Copilots? Evaluating Large Language Models from a Code Security Perspective
