Formal Verification and Autoformalization with Large Language Models

The field of formal verification is undergoing a significant shift as large language models (LLMs) are integrated to improve the efficiency and scalability of formal methods. Researchers are using LLMs as translators that convert real-world code into formal models, enabling the verification of security-critical properties. This approach promises to lower the barrier to entry for formal methods and unlock scalable verification across domains. The concept of autoformalization, in which LLMs translate informal input into formal logical representations, is also gaining traction, and a common framework for autoformalization has been proposed to encourage cross-pollination between fields and advance the development of next-generation AI systems.

Noteworthy papers in this area include What You Code Is What We Prove, which introduces a system combining static analysis, prompt-guided LLM translation, and symbolic verification to check core security features, and Discovering New Theorems via LLMs with In-Context Proof Learning in Lean, which proposes a pipeline that automatically generates mathematical conjectures and proves them in Lean 4.
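To make the idea of autoformalization concrete, here is a minimal sketch of the kind of artifact such a translation might produce: an informal statement about even numbers rendered as a Lean 4 theorem. The `IsEven` definition and the proof are illustrative assumptions, not output from any of the systems cited below; a production pipeline would more likely target Mathlib's `Even` and heavier proof automation.

```lean
-- Informal statement: "The sum of two even natural numbers is even."
-- Hypothetical formalization target; `IsEven` is defined inline so the
-- file has no dependencies (Mathlib's `Even` would normally be used).
def IsEven (n : Nat) : Prop := ∃ k, n = 2 * k

theorem even_add_even {m n : Nat} (hm : IsEven m) (hn : IsEven n) :
    IsEven (m + n) := by
  cases hm with
  | intro a ha =>
    cases hn with
    | intro b hb =>
      -- m + n = 2 * a + 2 * b = 2 * (a + b), so a + b witnesses evenness
      exact ⟨a + b, by rw [ha, hb, Nat.mul_add]⟩
```

The appeal of targeting a language like Lean is that the kernel machine-checks whatever the LLM produces, so translation errors surface as elaboration failures rather than as silent unsoundness.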

Sources

What You Code Is What We Prove: Translating BLE App Logic into Formal Models with LLMs for Vulnerability Detection

Natural Language Translation of Formal Proofs through Informalization of Proof Steps and Recursive Summarization along Proof Structure

Towards a Common Framework for Autoformalization

Proceedings of the Sixteenth International Symposium on Games, Automata, Logics, and Formal Verification

Discovering New Theorems via LLMs with In-Context Proof Learning in Lean

The mechanization of science illustrated by the Lean formalization of the multi-graded Proj construction
