Advances in Large Language Models for Cybersecurity
The field of cybersecurity is evolving rapidly as large language models (LLMs) are integrated into security workflows. Recent research leverages LLMs to automate complex tasks, improve operational efficiency, and enable reasoning-driven security analytics. One notable direction is identifying and mitigating disinformation campaigns, for example by extracting structured cyber threat intelligence indicators from unstructured disinformation content. LLMs are also being explored for human-AI collaboration, giving non-experts intelligent support in reasoning through complex cybersecurity problems. Noteworthy papers include 'CAMOUFLAGE: Exploiting Misinformation Detection Systems Through LLM-driven Adversarial Claim Transformation', which presents an iterative approach to creating adversarial claim rewritings that manipulate evidence retrieval and mislead claim-evidence comparison, and 'Holmes: Automated Fact Check with Large Language Models', which proposes an end-to-end framework with a novel evidence retrieval method that helps LLMs collect high-quality evidence.
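As a rough illustration of the concept-extraction direction, the minimal sketch below prompts an LLM to pull structured threat-intelligence fields (threat actor, narrative, targets, attack techniques) out of unstructured disinformation text and parses the reply as JSON. It assumes the OpenAI Python client, and the prompt schema, field names, and the extract_cti_indicators helper are illustrative assumptions rather than the pipeline used in any of the cited papers.

# Hypothetical sketch: LLM-based extraction of structured CTI indicators
# from unstructured disinformation content. Prompt schema, field names,
# and helper name are assumptions, not the cited papers' method.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

EXTRACTION_PROMPT = (
    "Extract cyber threat intelligence indicators from the following "
    "disinformation text. Respond with a JSON object containing the keys "
    "'threat_actor', 'narrative', 'targets', and 'attack_techniques'.\n\n"
    "Text:\n{text}"
)

def extract_cti_indicators(text: str, model: str = "gpt-4o-mini") -> dict:
    """Ask the LLM for structured indicators and parse its JSON reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": EXTRACTION_PROMPT.format(text=text)}],
        response_format={"type": "json_object"},  # request strict JSON output
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    sample = "A coordinated campaign claims a bank outage was caused by state-backed hackers."
    print(extract_cti_indicators(sample))

In practice, the extracted fields would feed downstream correlation against known campaigns (as in the FakeCTI work); the JSON response format is used here only to keep the parsing step trivial.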
Sources
CAMOUFLAGE: Exploiting Misinformation Detection Systems Through LLM-driven Adversarial Claim Transformation
Towards Effective Identification of Attack Techniques in Cyber Threat Intelligence Reports using Large Language Models
Elevating Cyber Threat Intelligence against Disinformation Campaigns with LLM-based Concept Extraction and the FakeCTI Dataset
AI-Driven IRM: Transforming insider risk management with adaptive scoring and LLM-based threat detection