Advancements in Software Engineering with Large Language Models

The field of software engineering is seeing significant developments from the integration of Large Language Models (LLMs). Current research focuses on using LLMs to improve code quality, automate testing, and support software maintenance, with growing attention to tasks such as code clone detection, automated test case generation, and bug repair. Noteworthy papers in this area include the following. From Bias To Improved Prompts proposes a framework to mitigate prompt bias in clone detection models. AKD introduces Adversarial Knowledge Distillation, an approach to improve model robustness and reliability on coding tasks. Synthetic Code Surgery presents a methodology for enhancing Automated Program Repair through LLM-based synthetic data generation. Byam explores the use of LLMs to automate client code updates in response to breaking dependency updates. LLM-Based Detection of Tangled Code Changes investigates the utility of LLMs for detecting tangled code changes toward higher-quality method-level bug datasets. Tests as Prompt introduces a benchmark for evaluating LLMs on test-driven development tasks.
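
To make the clone-detection setting concrete, the sketch below shows one way an LLM could be prompted to judge whether two code fragments are functional clones. This is a minimal illustration, not the method of any paper listed here; it assumes the OpenAI Python client and a placeholder model name, and a realistic pipeline would add few-shot examples, bias-aware prompt variants, and evaluation against a labeled clone benchmark.

```python
# Illustrative sketch only: prompt-based code clone detection with an LLM.
# Assumes the OpenAI Python client and a placeholder model name; not taken
# from any of the papers summarized above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def are_clones(snippet_a: str, snippet_b: str, model: str = "gpt-4o-mini") -> bool:
    """Ask the model whether two code fragments implement the same functionality."""
    prompt = (
        "Do the following two code fragments implement the same functionality? "
        "Answer with a single word: YES or NO.\n\n"
        f"Fragment A:\n{snippet_a}\n\nFragment B:\n{snippet_b}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep the classification-style answer as stable as possible
    )
    answer = response.choices[0].message.content.strip().upper()
    return answer.startswith("YES")


if __name__ == "__main__":
    # Two trivially equivalent fragments as a usage example.
    a = "def add(x, y):\n    return x + y"
    b = "def sum_two(a, b):\n    result = a + b\n    return result"
    print(are_clones(a, b))
```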

Sources

From Bias To Improved Prompts: A Case Study of Bias Mitigation of Clone Detection Models

JustinANN: Realistic Test Generation for Java Programs Driven by Annotations

PyResBugs: A Dataset of Residual Python Bugs for Natural Language-Driven Fault Injection

AKD: Adversarial Knowledge Distillation for Large Language Models Alignment on Coding Tasks

Synthetic Code Surgery: Repairing Bugs and Vulnerabilities with LLMs and Synthetic Data

Byam: Fixing Breaking Dependency Updates with Large Language Models

LLM-Based Detection of Tangled Code Changes for Higher-Quality Method-Level Bug Datasets

Tests as Prompt: A Test-Driven-Development Benchmark for LLM Code Generation

Evaluating Large Language Models for the Generation of Unit Tests with Equivalence Partitions and Boundary Values
