Advancements in LLM-Driven Software Engineering

The field of software engineering is undergoing a significant shift as Large Language Models (LLMs) are adopted across the development lifecycle. Recent studies demonstrate the potential of LLMs to automate tasks such as code migration, testing, and release note generation, with promising gains in the efficiency and effectiveness of development processes. LLM-driven approaches have also proven successful at identifying impacted requirements, generating review comments, and automating program repair. Furthermore, integrating LLMs with agent-based systems has enabled more robust and scalable software engineering frameworks. Challenges remain, notably ensuring the validity and reliability of LLM-driven systems, but the current trend suggests a promising future for LLM-driven software engineering. Noteworthy papers in this area include 'Agentic LLMs for REST API Test Amplification' and 'EvoDev: An Iterative Feature-Driven Framework for End-to-End Software Development with LLM-based Agents', which showcase innovative applications of LLMs in software engineering.

Sources

Agentic LLMs for REST API Test Amplification: A Comparative Study Across Cloud Applications

Validity Is What You Need

What a diff makes: automating code migration with large language models

Understanding Code Agent Behaviour: An Empirical Study of Success and Failure Trajectories

LLM-Driven Cost-Effective Requirements Change Impact Analysis

Issue-Oriented Agent-Based Framework for Automated Review Comment Generation

AgentGit: A Version Control Framework for Reliable and Scalable LLM-Powered Multi-Agent Systems

A Comprehensive Empirical Evaluation of Agent Frameworks on Code-centric Software Engineering Tasks

CodeClash: Benchmarking Goal-Oriented Software Engineering

HAFixAgent: History-Aware Automated Program Repair Agent

Exploring and Unleashing the Power of Large Language Models in CI/CD Configuration Translation

SmartMLOps Studio: Design of an LLM-Integrated IDE with Automated MLOps Pipelines for Model Development and Monitoring

SWE-Sharp-Bench: A Reproducible Benchmark for C# Software Engineering Tasks

EvoDev: An Iterative Feature-Driven Framework for End-to-End Software Development with LLM-based Agents

Neural Network Interoperability Across Platforms

ReleaseEval: A Benchmark for Evaluating Language Models in Automated Release Note Generation

PoCo: Agentic Proof-of-Concept Exploit Generation for Smart Contracts

From Code Changes to Quality Gains: An Empirical Study in Python ML Systems with PyQu

Security Analysis of Agentic AI Communication Protocols: A Comparative Evaluation

Benchmarking and Studying the LLM-based Agent System in End-to-End Software Development

Speed at the Cost of Quality? The Impact of LLM Agent Assistance on Software Development
