The field of formal verification is experiencing a significant shift as large language models (LLMs) are integrated to improve the efficiency and scalability of formal methods. Researchers are exploring LLMs as translators that convert real-world code into formal models, enabling the verification of security-critical properties. This approach has the potential to lower the barrier to entry for formal methods and to unlock scalable verification across a range of domains. Autoformalization is also gaining traction, with LLMs used to translate informal input into formal logical representations, and a common framework for autoformalization has been proposed to encourage cross-pollination between fields and to advance the development of next-generation AI systems.

Noteworthy papers in this area include What You Code Is What We Prove, which introduces a system that combines static analysis, prompt-guided LLM translation, and symbolic verification to check core security properties, and Discovering New Theorems via LLMs with In-Context Proof Learning in Lean, which proposes a pipeline for automatically generating mathematical conjectures and proving them in Lean 4.
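To make the target representation concrete, the sketch below shows a toy conjecture stated and machine-checked in Lean 4. The theorem name, statement, and proof are illustrative placeholders, not taken from either paper, and only core Lean tactics are assumed.

```lean
-- Illustrative only: a toy conjecture of the kind an autoformalization
-- pipeline might emit, stated and machine-checked in Lean 4.
-- Informal statement: "doubling a natural number yields an even number",
-- formalized as the existence of a witness k with 2 * n = k + k.
theorem double_is_even (n : Nat) : ∃ k, 2 * n = k + k :=
  ⟨n, by omega⟩  -- the kernel accepts the proof only if the arithmetic checks out
```

A pipeline of this kind would typically generate many candidate statements of this shape and keep only those whose proofs are accepted by the Lean kernel, so the kernel acts as the final arbiter of correctness regardless of how the LLM produced the conjecture or the proof.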