The field of cybersecurity and fake news detection is seeing significant advances through the integration of Large Language Models (LLMs). Researchers are exploring ways to leverage LLMs to detect and mitigate threats such as phishing attacks and fake news. One key trend is the use of multi-agent frameworks, in which several LLM agents debate a case and critically examine one another's reasoning, yielding more accurate and interpretable verdicts. Another area of focus is intent-based categorization, which classifies emails and websites into distinct intent categories, providing actionable threat information. Together, these advances stand to improve AI safety and effectiveness in cybersecurity and fake news detection.

Noteworthy papers include RedDebate, which proposes a novel multi-agent debate framework for identifying and mitigating unsafe behaviours in LLMs, and PhishDebate, which introduces a modular multi-agent LLM-based framework for phishing website detection that achieves high recall and true positive rates.
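To make the multi-agent debate idea concrete, here is a minimal sketch of a debate loop in the spirit of frameworks like PhishDebate. It is not the authors' implementation: the `call_llm` wrapper, the agent names, the round count, and the prompts are all hypothetical placeholders for whatever LLM client and prompt design a real system would use.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call; replace with a real client."""
    return "stub response"


def debate(claim: str, rounds: int = 2) -> str:
    """Run a fixed number of debate rounds, then ask a judge agent to decide."""
    history = []
    for r in range(rounds):
        for agent in ("Agent A", "Agent B"):
            transcript = "\n".join(history)
            reply = call_llm(
                f"You are {agent} debating whether this website is phishing.\n"
                f"Claim: {claim}\nDebate so far:\n{transcript}\n"
                "Critique the other agent's reasoning, then give your own assessment."
            )
            history.append(f"{agent} (round {r + 1}): {reply}")
    # A judge agent reads the full transcript and produces the final label,
    # which is what makes the outcome inspectable: the reasoning is on record.
    return call_llm(
        "You are the judge. Given this debate transcript, output a final "
        "verdict (phishing / legitimate) with a short justification:\n"
        + "\n".join(history)
    )


if __name__ == "__main__":
    print(debate("http://examp1e-login.com asks for bank credentials"))
```

The interpretability benefit falls out of the structure: the final verdict is grounded in a transcript of critiques that a human analyst can review.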
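Intent-based categorization can likewise be sketched as a constrained classification call. The label set below is illustrative only, not a taxonomy from any of the cited papers, and `call_llm` is the same hypothetical wrapper as in the previous sketch.

```python
# Illustrative intent labels; real systems would define their own taxonomy.
INTENTS = ["credential_harvesting", "malware_delivery", "invoice_fraud", "benign"]


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call; replace with a real client."""
    return "benign"


def classify_intent(email_text: str) -> str:
    """Map an email to one intent label, giving analysts actionable context."""
    label = call_llm(
        "Classify the sender's intent behind this email as exactly one of "
        f"{INTENTS}. Reply with the label only.\n\nEmail:\n{email_text}"
    ).strip()
    # Guard against the model replying outside the allowed label set.
    return label if label in INTENTS else "benign"
```

The point of predicting an intent rather than a binary spam flag is that the label itself suggests a response, e.g. a `credential_harvesting` verdict can trigger credential-reset workflows rather than just quarantining the message.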