Advancements in Software Engineering with Large Language Models
Software engineering is advancing rapidly through the integration of Large Language Models (LLMs). Recent work leverages LLMs for code generation, automated bug detection and repair, and broader improvements to the development process, reducing manual effort and improving software reliability. Research is also combining LLMs with complementary techniques such as retrieval-augmented generation and graph-based methods to extend their capabilities, and applications are expected to grow in areas like automated testing, code review, and project management. Noteworthy papers include LLM-Based Program Generation for Triggering Numerical Inconsistencies Across Compilers, a framework for generating programs that expose numerical inconsistencies; RepoDebug, a repository-level, multi-task, multi-language debugging evaluation of LLMs; and TreeGPT, a hybrid architecture for abstract syntax tree processing that shows promising results on neural program synthesis tasks.
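One of the highlighted papers generates programs whose results differ across compilers due to floating-point optimizations. The paper's pipeline is not reproduced here; as a minimal stand-in, the sketch below shows the core idea of such differential testing, using two summation orders (sequential vs. reassociated pairwise, analogous to what a compiler may emit under aggressive floating-point flags) and searching random inputs for a discrepancy. All function names and parameters are illustrative.

```python
import random

def naive_sum(xs):
    # Sequential left-to-right accumulation, as an unoptimized build would compute.
    total = 0.0
    for x in xs:
        total += x
    return total

def pairwise_sum(xs):
    # Reassociated (pairwise) summation, mimicking the kind of reordering a
    # compiler may perform under aggressive floating-point optimizations.
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    return pairwise_sum(xs[:mid]) + pairwise_sum(xs[mid:])

def find_inconsistency(trials=1000, n=64, seed=0):
    # Search random inputs for one where the two evaluation orders disagree.
    # (In the paper, an LLM generates whole programs; plain random data
    # stands in for that here.)
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.uniform(-1e8, 1e8) for _ in range(n)]
        if naive_sum(xs) != pairwise_sum(xs):
            return xs
    return None
```

With wide-magnitude inputs, the two orders typically disagree within a few trials, which is exactly the kind of inconsistency the generated programs are meant to trigger across real compilers.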
Sources
Formalizing Linear Motion G-code for Invariant Checking and Differential Testing of Fabrication Tools
RepoDebug: Repository-Level Multi-Task and Multi-Language Debugging Evaluation of Large Language Models
TreeGPT: A Novel Hybrid Architecture for Abstract Syntax Tree Processing with Global Parent-Child Aggregation
Natural Language-Programming Language Software Traceability Link Recovery Needs More than Textual Similarity
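TreeGPT, listed above, processes abstract syntax trees with global parent-child aggregation. As a minimal non-learned sketch of that data flow, the following passes a bottom-up feature (subtree size) from children to parents and a top-down feature (depth) from parents to children over a Python AST; the feature choices are illustrative stand-ins for learned representations.

```python
import ast

def aggregate(tree):
    # Bottom-up pass: each node aggregates a feature (subtree size) from its
    # children. Top-down pass: each node receives its depth from its parent.
    sizes, depths = {}, {}

    def up(node):
        size = 1
        for child in ast.iter_child_nodes(node):
            size += up(child)
        sizes[id(node)] = size
        return size

    def down(node, depth=0):
        depths[id(node)] = depth
        for child in ast.iter_child_nodes(node):
            down(child, depth + 1)

    up(tree)
    down(tree)
    return sizes, depths
```

For example, `aggregate(ast.parse("x = 1 + 2"))` assigns every node its subtree size and depth, giving each node both global (subtree-wide) and positional context, the two directions of information flow that parent-child aggregation combines.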