The field of automated code repair and vulnerability detection is moving towards large language models (LLMs) and machine learning techniques as a way to improve the accuracy and efficiency of static analysis tools. Researchers are exploring LLMs both to repair defects in code and to detect vulnerabilities, with encouraging early results. A key advantage of LLMs is their ability to reason across broader code contexts, which yields higher recall. This benefit comes with trade-offs, however: higher false-positive ratios and imprecise localisation of issues. As a result, hybrid pipelines that combine LLMs with traditional rule-based scanners are recommended for high-assurance verification.

Noteworthy papers in this area include BitsAI-Fix, which presents an LLM-driven approach to automated lint error resolution and has demonstrated practical feasibility in enterprise environments, resolving over 12,000 static analysis issues with approximately 85% remediation accuracy, and Large Language Models Versus Static Code Analysis Tools, which provides a systematic benchmark for vulnerability detection and shows that LLMs can rival traditional static analysers in finding real vulnerabilities, albeit with limitations.
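To make the hybrid-pipeline idea concrete, the sketch below shows one possible merge step in Python: scanner findings are trusted as-is, while LLM findings are auto-confirmed only when a rule-based hit lands nearby and are otherwise routed to manual review. The `Finding` type, the `merge_findings` function, and the line-tolerance heuristic are illustrative assumptions, not the design of any specific tool or paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    file: str
    line: int
    rule: str    # e.g. a CWE identifier or scanner rule id
    source: str  # "scanner" or "llm"

def merge_findings(scanner_findings, llm_findings, line_tolerance=3):
    """Combine rule-based scanner output with LLM-reported findings.

    Scanner findings are kept unchanged (precise localisation). An LLM
    finding is confirmed only if a scanner finding in the same file lies
    within `line_tolerance` lines; otherwise it goes to manual review,
    reflecting the higher false-positive rate of LLM-only detections.
    """
    confirmed = list(scanner_findings)
    needs_review = []
    for lf in llm_findings:
        corroborated = any(
            sf.file == lf.file and abs(sf.line - lf.line) <= line_tolerance
            for sf in scanner_findings
        )
        (confirmed if corroborated else needs_review).append(lf)
    return confirmed, needs_review

if __name__ == "__main__":
    # Hypothetical findings for illustration only.
    scanner = [Finding("app/db.py", 42, "CWE-89", "scanner")]
    llm = [
        Finding("app/db.py", 44, "CWE-89", "llm"),     # near a scanner hit: confirmed
        Finding("app/auth.py", 10, "CWE-798", "llm"),  # LLM-only: sent to review
    ]
    confirmed, review = merge_findings(scanner, llm)
    print(f"confirmed: {len(confirmed)}, needs review: {len(review)}")
```

In this sketch the scanner provides precision and localisation while the LLM contributes recall; the review queue is where the imprecise LLM-only findings are absorbed rather than reported directly.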