AI-Driven Software Testing and Validation

Software testing and validation is being transformed by the integration of Large Language Models (LLMs). The dominant trend is to use LLMs to automate testing tasks such as test generation, oracle generation, and code refactoring, with the aim of improving engineering productivity and reducing manual effort. New approaches are also emerging to address the limitations of existing methods: functional programming and type systems are being used to translate code into formal representations amenable to verification, and dedicated frameworks validate the behavioral equivalence of legacy code and its translated modern counterpart (a differential-testing sketch of this idea follows the list below). LLM-based testing is further being applied to specialized domains such as quantum software frameworks, where it helps keep code compatible with rapidly changing APIs. Reported results are promising, with measurable improvements in test suite quality and reductions in testing cost. Noteworthy papers include:

  • Towards Automated Formal Verification of Backend Systems with LLMs, which proposes a framework in which LLMs drive the automated formal verification of backend systems.
  • Ever-Improving Test Suite by Leveraging Large Language Models, which presents an approach that incrementally augments a test suite with test cases exercising behaviors that emerge in production (a minimal trace-to-test sketch appears after the verification example below).
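
The equivalence-validation idea behind the COBOL-to-Java work can be made concrete with differential testing: run the legacy routine and its translation on identical inputs and flag any divergence. The sketch below illustrates that loop under stated assumptions; `legacy_interest`, `translated_interest`, and the input ranges are hypothetical stand-ins for exposition, not code or APIs from the paper.

```python
import random

def legacy_interest(amount_cents: int, rate_bp: int) -> int:
    """Stand-in for the legacy routine: interest in cents, pure integer math."""
    return amount_cents * rate_bp // 10_000

def translated_interest(amount_cents: int, rate_bp: int) -> int:
    """Stand-in for the translated routine; a float detour introduced during
    translation silently changes the rounding behavior."""
    return int(amount_cents * (rate_bp / 10_000))

def differential_test(trials: int = 10_000, seed: int = 0):
    """Feed identical random inputs to both versions and collect divergences."""
    rng = random.Random(seed)  # fixed seed keeps any failure reproducible
    mismatches = []
    for _ in range(trials):
        amount = rng.randrange(0, 10**9)   # up to $10M, in cents
        rate = rng.randrange(0, 10_000)    # 0-100%, in basis points
        old = legacy_interest(amount, rate)
        new = translated_interest(amount, rate)
        if old != new:
            mismatches.append(((amount, rate), old, new))
    return mismatches

if __name__ == "__main__":
    bad = differential_test()
    print(f"{len(bad)} of 10000 inputs diverged")
    for inputs, old, new in bad[:5]:
        print(f"  inputs={inputs}: legacy={old}, translated={new}")
```

Fixing the random seed makes every reported divergence reproducible, which matters when mismatching inputs are fed back to guide repair of the translation.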

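The second bullet's approach can be sketched in a similar spirit: record the inputs and outputs a function sees in production, then fold each previously unseen behavior back into the suite as a regression test. Everything below is illustrative scaffolding under that assumption; `record_behavior`, `shipping_cost`, the `shipping` module, and the JSONL trace format are hypothetical, and the paper's LLM-driven refinement of test cases is not modeled here.

```python
import functools, json
from pathlib import Path

TRACE_LOG = Path("production_traces.jsonl")

def record_behavior(func):
    """Decorator: log each production call's inputs and observed output."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        with TRACE_LOG.open("a") as log:
            log.write(json.dumps({"func": func.__name__, "args": args,
                                  "kwargs": kwargs, "result": result}) + "\n")
        return result
    return wrapper

@record_behavior
def shipping_cost(weight_kg: float, express: bool = False) -> float:
    return round((5.0 + 1.2 * weight_kg) * (1.8 if express else 1.0), 2)

def emit_regression_tests(out_path: str = "test_from_production.py") -> int:
    """Turn each distinct recorded behavior into a pytest-style test,
    using the recorded output itself as the oracle."""
    if not TRACE_LOG.exists():
        return 0
    seen, lines = set(), ["from shipping import shipping_cost", ""]
    for i, raw in enumerate(TRACE_LOG.read_text().splitlines()):
        t = json.loads(raw)
        key = (t["func"], tuple(t["args"]), tuple(sorted(t["kwargs"].items())))
        if key in seen:  # skip behaviors the suite already covers
            continue
        seen.add(key)
        call = ", ".join([repr(a) for a in t["args"]] +
                         [f"{k}={v!r}" for k, v in t["kwargs"].items()])
        lines += [f"def test_production_case_{i}():",
                  f"    assert {t['func']}({call}) == {t['result']!r}", ""]
    Path(out_path).write_text("\n".join(lines))
    return len(seen)
```

Calling `emit_regression_tests()` after a window of traffic writes deduplicated cases to `test_from_production.py`. Note that the recorded output serves only as a regression oracle: it pins current behavior rather than specifying correct behavior.
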
Sources

Towards Automated Formal Verification of Backend Systems with LLMs

Automated Validation of COBOL to Java Transformation

Ever-Improving Test Suite by Leveraging Large Language Models

Test code generation at Ericsson using Program Analysis Augmented Fine Tuned LLMs

Automatic Qiskit Code Refactoring Using Large Language Models

Large Language Models for Unit Testing: A Systematic Literature Review
