The field of fact-checking is advancing rapidly through the integration of Artificial Intelligence (AI). A key trend is the development of Large Language Models (LLMs) that can automatically generate fact-checking articles, bridging the gap between automated fact-checking and human-driven reporting. Another area of focus is the evaluation of AI-generated news reports: studies indicate that LLMs can assess the veracity of claims with reasonable reliability, though their performance varies with the type of information being checked. Research also highlights the need for actionable policies governing the use of AI in fact-checking, to ensure its responsible and effective integration into media ecosystems. Noteworthy papers include CLAIMCHECK, which introduces an annotated dataset for benchmarking LLMs on claim-centric tasks, and Real-Time Evaluation Models for RAG, which presents a comprehensive benchmark of evaluation models for detecting hallucinations in Retrieval-Augmented Generation (RAG). Together, these studies illustrate the innovative work being done to advance fact-checking and to promote accuracy and trust in media reporting.
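To make the RAG-evaluation idea concrete, the sketch below shows the basic shape of a real-time hallucination check: given the retrieved passages and a generated answer, flag answer sentences that lack support in the evidence. This is a minimal illustration, not the method of the cited benchmark; the function names are hypothetical, and the lexical-overlap heuristic stands in for the trained entailment or evaluation models a real system would use.

```python
# Minimal sketch of a real-time RAG hallucination check: flag answer
# sentences with no lexical support in the retrieved passages.
# NOTE: the overlap heuristic is a toy stand-in for an NLI / trained
# evaluation model; all names here are illustrative, not from the paper.
import re


def sentences(text: str) -> list[str]:
    """Naive sentence splitter; adequate for a sketch."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]


def content_words(text: str) -> set[str]:
    """Lowercased alphanumeric tokens longer than 3 characters."""
    return {w for w in re.findall(r"[a-z0-9]+", text.lower()) if len(w) > 3}


def is_supported(sentence: str, passages: list[str], threshold: float = 0.5) -> bool:
    """A sentence counts as supported if enough of its content words
    appear in at least one retrieved passage (stand-in for entailment)."""
    words = content_words(sentence)
    if not words:
        return True  # nothing checkable, e.g. "Yes."
    return any(
        len(words & content_words(p)) / len(words) >= threshold
        for p in passages
    )


def flag_hallucinations(answer: str, passages: list[str]) -> list[str]:
    """Return the answer sentences not grounded in any passage."""
    return [s for s in sentences(answer) if not is_supported(s, passages)]


if __name__ == "__main__":
    passages = [
        "The Eiffel Tower was completed in 1889 for the World's Fair in Paris."
    ]
    answer = (
        "The Eiffel Tower was completed in 1889. "
        "It was painted bright green until 1954."
    )
    for s in flag_hallucinations(answer, passages):
        print("UNSUPPORTED:", s)  # flags the second, ungrounded sentence
```

Production evaluation models replace the overlap test with learned judgments, but the interface is the same: evidence and answer in, per-claim support decisions out, fast enough to run at generation time.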