The field of multimodal deception detection and mitigation is advancing rapidly, with a focus on developing methods to combat AI-generated disinformation. Researchers are exploring new approaches to detecting and grounding multimedia manipulation, including multimodal large language models (MLLMs) and multi-agent systems designed to improve the scalability, modularity, and explainability of deception detection and correction methods. Notably, transparent and open frameworks, such as those built on fixed-decoder architectures and adversarial perturbation generation (see the first sketch below), are enabling more effective and efficient detection of manipulated multimedia content. Furthermore, information-theoretic measures and self-training approaches (see the second sketch below) are strengthening the robustness of deception detection methods against adversarial attacks. Overall, the field is moving toward more sophisticated and effective methods for detecting and mitigating multimodal deception.

Noteworthy papers include: The Coherence Trap, which proposes a new adversarial pipeline for detecting MLLM-crafted narratives; ManuSearch, which introduces a transparent and modular multi-agent framework for deep search in large language models; and Fooling the Watchers, which presents an automated adversarial prompt generation framework for breaking AIGC detectors.
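To make the adversarial perturbation idea concrete, the sketch below shows a one-step FGSM-style attack against a binary AIGC detector. This is a generic illustration of the technique, not the pipeline from Fooling the Watchers (which generates adversarial prompts rather than pixel perturbations); the `detector` module and the 4/255 budget are illustrative assumptions.

```python
# Minimal FGSM sketch: nudge an input image so a binary AIGC detector
# (class 0 = "real", class 1 = "AI-generated") scores it as real.
# `detector` is a hypothetical placeholder model, not a published method.
import torch
import torch.nn.functional as F

def fgsm_evade(detector: torch.nn.Module, image: torch.Tensor,
               eps: float = 4 / 255) -> torch.Tensor:
    """One gradient step that lowers the detector's 'AI-generated' score."""
    image = image.clone().detach().requires_grad_(True)
    logits = detector(image)  # expected shape: (batch, 2)
    # Loss toward the "real" class; minimizing it pushes the verdict to 0.
    target = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    loss = F.cross_entropy(logits, target)
    loss.backward()
    # Step against the loss gradient, keeping pixels in a valid [0, 1] range.
    adv = image - eps * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```

Detectors hardened with adversarial training typically fold perturbed samples like these back into the training set, which is one reason such generation frameworks improve robustness evaluation.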
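As a second sketch, one simple information-theoretic signal used to harden detection pipelines is the entropy of the predictive distribution: adversarially perturbed or out-of-distribution inputs often push a model toward near-uniform predictions. The threshold below is an illustrative assumption, not a value taken from the cited papers.

```python
# Entropy-based robustness check (illustrative, assuming a calibrated
# detector that outputs class probabilities for "real" vs "AI-generated").
import numpy as np

def predictive_entropy(probs: np.ndarray) -> float:
    """Shannon entropy H(p) = -sum_i p_i log p_i in nats."""
    p = np.clip(probs, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def flag_for_review(probs: np.ndarray, threshold: float = 0.6) -> bool:
    # High-entropy (near-uniform) predictions suggest an unreliable verdict;
    # route these to human review or exclude them from self-training.
    return predictive_entropy(probs) > threshold

print(flag_for_review(np.array([0.55, 0.45])))  # True: detector is unsure
print(flag_for_review(np.array([0.99, 0.01])))  # False: confident verdict
```

Gating self-training on low-entropy predictions in this way keeps confidently mislabeled or adversarial samples from contaminating the pseudo-label pool.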