The field of software development is witnessing significant advances from the integration of Large Language Models (LLMs). Recent studies demonstrate the potential of LLMs for code-comment synchronization, automated unit test generation, and code refactoring. LLMs have shown promise in reducing technical debt, improving code quality, and enhancing software testing education. Notably, they have been successfully applied to generate high-quality comments, detect self-admitted technical debt, and optimize knowledge utilization for multi-intent comment generation. LLMs have also been used to automate program repair, reduce test re-runs, and improve the efficiency of order-dependent test detection. While limitations remain to be addressed, current developments indicate a positive direction for the field.
Noteworthy papers include R2ComSync, which proposes an in-context learning (ICL) approach to code-comment synchronization enhanced with retrieval and re-ranking, outperforming existing approaches; and LSPRAG, a framework for concise-context retrieval tailored to real-time, language-agnostic unit test generation, which increases line coverage by up to 213.31% for Java.
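The retrieve-then-re-rank pattern behind approaches like R2ComSync can be illustrated with a minimal sketch. This is not the paper's actual implementation: the function names, the toy Jaccard similarity, and the score weights are all assumptions made for illustration. The idea is a coarse first-stage retrieval of candidate (code, comment) exemplars, followed by a finer second-stage re-ranking, with the survivors assembled into an ICL prompt.

```python
# Illustrative sketch only (not R2ComSync's implementation): the generic
# retrieve-then-re-rank pattern for selecting in-context learning exemplars.
# The similarity metric and score weights below are placeholder assumptions.

def tokenize(text):
    return set(text.lower().split())

def jaccard(a, b):
    ta, tb = tokenize(a), tokenize(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def retrieve(query_code, corpus, k=5):
    """Stage 1: coarse retrieval of candidate (code, comment) exemplars
    by lexical similarity to the query code."""
    ranked = sorted(corpus, key=lambda ex: jaccard(query_code, ex["code"]),
                    reverse=True)
    return ranked[:k]

def rerank(query_code, candidates, top_n=2):
    """Stage 2: re-rank candidates with a finer signal (here, a weighted
    mix of code and comment similarity; a real system would use a
    learned ranker)."""
    def score(ex):
        return (0.7 * jaccard(query_code, ex["code"])
                + 0.3 * jaccard(query_code, ex["comment"]))
    return sorted(candidates, key=score, reverse=True)[:top_n]

# Toy exemplar corpus of (code, comment) pairs.
corpus = [
    {"code": "def add(a, b): return a + b", "comment": "Add two numbers."},
    {"code": "def sub(a, b): return a - b", "comment": "Subtract b from a."},
    {"code": "def read_file(path): pass",   "comment": "Read a file from disk."},
]

query = "def add_all(nums): return sum(nums)"
exemplars = rerank(query, retrieve(query, corpus, k=3), top_n=2)

# Assemble the selected exemplars into an ICL prompt prefix.
prompt = "\n\n".join(f"Code: {e['code']}\nComment: {e['comment']}"
                     for e in exemplars)
```

In practice the two stages use different cost/quality trade-offs: a cheap retriever scans a large exemplar pool, and a more expensive re-ranker refines only the shortlist before prompt construction.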