Large Language Models in Software Engineering: Improved Code Generation and Analysis

Software engineering research is advancing rapidly through the integration of Large Language Models (LLMs). Recent studies focus on enhancing code generation, improving code quality, and developing more effective bug detection and repair methods. The field is moving toward using LLMs to automate and optimize tasks such as code generation, code review, and testing. Notably, researchers are combining LLMs with complementary techniques, such as static analysis and fuzzing, to improve the accuracy and efficiency of these tasks. There is also growing interest in frameworks and benchmarks for evaluating LLM performance on software engineering tasks, to ensure their reliability and effectiveness in real-world applications. Noteworthy papers include 'Zero-Shot Detection of LLM-Generated Code via Approximated Task Conditioning', which proposes a novel approach for detecting LLM-generated code, and 'EXPEREPAIR: Dual-Memory Enhanced LLM-based Repository-Level Program Repair', which presents a dual-memory approach to repository-level program repair with LLMs.
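
To illustrate the LLM-plus-static-analysis direction mentioned above, the minimal Python sketch below pairs a simple AST-based check with an LLM review prompt. It is an assumption-laden illustration, not any cited paper's method: the query_llm helper is a hypothetical placeholder for whatever chat-completion client is actually used.

import ast

def find_bare_excepts(source: str) -> list[int]:
    # Static analysis step: report line numbers of bare `except:` handlers.
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        if isinstance(node, ast.ExceptHandler) and node.type is None
    ]

def build_review_prompt(source: str, findings: list[int]) -> str:
    # Combine the code and the static-analysis findings into one review prompt,
    # so the model is steered toward concrete, tool-confirmed issues.
    issues = "\n".join(f"- bare except on line {n}" for n in findings) or "- none"
    return (
        "Review the following code. Static analysis reported these issues:\n"
        f"{issues}\n\nCode:\n{source}\n"
        "Explain each issue and suggest a minimal fix."
    )

def query_llm(prompt: str) -> str:
    # Hypothetical placeholder for an LLM call; swap in a real client here.
    return "(model response would appear here)"

if __name__ == "__main__":
    snippet = "try:\n    risky()\nexcept:\n    pass\n"
    print(query_llm(build_review_prompt(snippet, find_bare_excepts(snippet))))

The same pattern extends to richer tooling: findings from linters or fuzzers can be injected into the prompt in place of the single AST check used here.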
Sources
Which Prompting Technique Should I Use? An Empirical Investigation of Prompting Techniques for Software Engineering Tasks
Expert-in-the-Loop Systems with Cross-Domain and In-Domain Few-Shot Learning for Software Vulnerability Detection