Large Language Models in Software Testing and Development

The field of software testing and development is undergoing a significant shift with the increasing adoption of Large Language Models (LLMs). Researchers are exploring ways to leverage LLMs to improve the reliability, accuracy, and efficiency of testing and development workflows. One key direction is integrating LLMs with established software engineering practices, such as Test-Driven Development (TDD), to enhance the correctness and verifiability of generated code. Another focus is LLM-based test generation, including API test generation and conformance testing, which has shown promising results in improving code coverage and reducing manual effort. LLMs are also being used to strengthen existing test suites by generating test cases that exercise execution scenarios the current suite does not cover. Noteworthy papers in this area include:

  • Leveraging Test Driven Development with Large Language Models for Reliable and Verifiable Spreadsheet Code Generation, which proposes a structured research framework for integrating TDD with LLM-driven generation.
  • BOSQTGEN, which introduces a novel black-box methodology and tool for API test generation using LLMs.
  • E-Test, which augments test suites with test cases that exercise execution scenarios beyond those covered by the existing suite.
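
The TDD-with-LLM idea above can be sketched as a simple generate-and-check loop: tests are written first, the model proposes an implementation, and the pre-written tests gate acceptance. This is a minimal illustration, not the framework from the paper; `llm_generate` is a hypothetical stand-in for a real model call.

```python
def llm_generate(spec: str) -> str:
    # Placeholder for a real LLM call; returns candidate source code.
    return "def add(a, b):\n    return a + b\n"

def run_tests(code: str, tests) -> bool:
    """Execute candidate code and check it against pre-written tests."""
    namespace: dict = {}
    try:
        exec(code, namespace)
        return all(test(namespace) for test in tests)
    except Exception:
        return False

def tdd_generate(spec: str, tests, max_attempts: int = 3):
    """Regenerate until the tests pass, or give up after max_attempts."""
    for _ in range(max_attempts):
        candidate = llm_generate(spec)
        if run_tests(candidate, tests):
            return candidate
    return None

# Tests are authored before any code is generated, per TDD.
tests = [
    lambda ns: ns["add"](2, 3) == 5,
    lambda ns: ns["add"](-1, 1) == 0,
]
result = tdd_generate("Write add(a, b) returning the sum.", tests)
```

The key property is that the test suite, not the model, decides when generation succeeds, which is what makes the output verifiable.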

Sources

Leveraging Test Driven Development with Large Language Models for Reliable and Verifiable Spreadsheet Code Generation: A Research Framework

Software Testing with Large Language Models: An Interview Study with Practitioners

BOSQTGEN: Breaking the Sound Barrier in Test Generation

E-Test: E'er-Improving Test Suites

SODBench: A Large Language Model Approach to Documenting Spreadsheet Operations
