Advances in Software Development with Large Language Models

The field of software development is seeing significant advances from the integration of Large Language Models (LLMs). Recent studies demonstrate the potential of LLMs for code-comment synchronization, automated unit test generation, and code refactoring, and their use shows promise for reducing technical debt, improving code quality, and enhancing software testing education. Notably, LLMs have been successfully applied to generate high-quality comments, detect self-admitted technical debt, and optimize knowledge utilization for multi-intent comment generation. They have also been used to automate program repair, reduce test re-runs, and detect order-dependent flaky tests more efficiently. While limitations remain, current developments point the field in a positive direction.
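To make one of these applications concrete, the sketch below shows how an LLM might be prompted to flag self-admitted technical debt (SATD) in code comments. Everything here is illustrative: the prompt wording, the label set, and the `complete` function (a stand-in for any LLM completion API, implemented as a keyword placeholder so the example runs) are assumptions, not the protocol of any of the cited papers.

```python
SATD_PROMPT = """You are a code reviewer. Classify the source comment below.
Answer with exactly one label: SATD (the author admits a shortcut, workaround,
or known deficiency) or NOT_SATD.

Comment: {comment}
Label:"""


def complete(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; swap in a real client.

    The keyword heuristic exists only so the sketch runs end-to-end.
    """
    comment = prompt.rsplit("Comment:", 1)[-1].lower()
    markers = ("todo", "fixme", "hack", "workaround", "kludge")
    return "SATD" if any(m in comment for m in markers) else "NOT_SATD"


def is_satd(comment: str) -> bool:
    """Classify a single comment as self-admitted technical debt."""
    answer = complete(SATD_PROMPT.format(comment=comment.strip()))
    return answer.strip().upper().startswith("SATD")


if __name__ == "__main__":
    print(is_satd("# HACK: fix ordering before release"))  # True
    print(is_satd("# Computes the weighted average."))     # False
```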

Noteworthy papers include:

R2ComSync, which proposes an in-context learning (ICL) based code-comment synchronization approach enhanced with retrieval and re-ranking, outperforming competing approaches.

LSPRAG, which presents a framework for concise-context retrieval tailored for real-time, language-agnostic unit test generation, increasing line coverage by up to 213.31% for Java.
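To illustrate the retrieval-and-re-ranking flavor of in-context learning that R2ComSync builds on, here is a minimal, hypothetical sketch: fetch the historical (code diff, old comment, updated comment) exemplars most similar to the query, re-rank them, and assemble a few-shot prompt. The lexical similarity measure, the re-ranking heuristic, and the prompt template are simplified stand-ins, not R2ComSync's actual pipeline.

```python
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    """Cheap lexical similarity; real systems would use embeddings."""
    return SequenceMatcher(None, a, b).ratio()


def build_icl_prompt(code_diff: str, stale_comment: str,
                     corpus: list, k: int = 3) -> str:
    """Retrieve, re-rank, and format exemplars into a few-shot prompt."""
    # Retrieval: rank exemplars by similarity of their diff to the query diff.
    retrieved = sorted(corpus, key=lambda ex: similarity(ex["diff"], code_diff),
                       reverse=True)[: 2 * k]
    # Re-ranking: among the retrieved pool, prefer exemplars whose old comment
    # resembles the stale comment (a toy proxy for a learned re-ranker).
    top = sorted(retrieved,
                 key=lambda ex: similarity(ex["old_comment"], stale_comment),
                 reverse=True)[:k]
    shots = "\n\n".join(
        f"Code change:\n{ex['diff']}\nOld comment: {ex['old_comment']}\n"
        f"Updated comment: {ex['new_comment']}"
        for ex in top
    )
    return (f"{shots}\n\nCode change:\n{code_diff}\n"
            f"Old comment: {stale_comment}\nUpdated comment:")


if __name__ == "__main__":
    corpus = [{"diff": "- return x\n+ return x * 2",
               "old_comment": "Returns x.",
               "new_comment": "Returns x doubled."}]
    print(build_icl_prompt("- return y\n+ return y * 2", "Returns y.", corpus, k=1))
```

The assembled prompt would then be sent to an LLM, whose completion becomes the synchronized comment.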

Sources

R2ComSync: Improving Code-Comment Synchronization with In-Context Learning and Reranking

LSPRAG: LSP-Guided RAG for Language-Agnostic Real-Time Unit Test Generation

Understanding Self-Admitted Technical Debt in Test Code: An Empirical Study

Harnessing the Power of Large Language Models for Software Testing Education: A Focus on ISTQB Syllabus

Operationalizing Large Language Models with Design-Aware Contexts for Code Comment Generation

A First Look at the Self-Admitted Technical Debt in Test Code: Taxonomy and Detection

Checkstyle+: Reducing Technical Debt Through The Use of Linters with LLMs

Automated Program Repair Based on REST API Specifications Using Large Language Models

Optimizing Knowledge Utilization for Multi-Intent Comment Generation with Large Language Models

Understanding the Characteristics of LLM-Generated Property-Based Tests in Exploring Edge Cases

Beyond Synthetic Benchmarks: Evaluating LLM Performance on Real-World Class-Level Code Generation

Reduction of Test Re-runs by Prioritizing Potential Order Dependent Flaky Tests

Automated Extract Method Refactoring with Open-Source LLMs: A Comparative Study
