Multimodal fact-checking and sarcasm detection are evolving rapidly, with a focus on more accurate and efficient methods for verifying the veracity of online content. Recent research highlights the importance of jointly considering multiple modalities, such as text, images, and videos, when evaluating a claim, and recognizes the need for more nuanced, context-dependent approaches to sarcasm detection, particularly in low-resource settings. Notable papers in this area include M4FC, a multimodal, multilingual, and multicultural fact-checking dataset, and Teaching Sarcasm, a framework for few-shot multimodal sarcasm detection via distillation. M4FC provides a comprehensive benchmark for evaluating fact-checking models, while Teaching Sarcasm shows that parameter-efficient fine-tuning can achieve strong results in few-shot scenarios. Together, these advances stand to improve the accuracy and effectiveness of fact-checking and sarcasm detection systems and to foster a better-informed, more critical online community.
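The summary above mentions distillation as the mechanism behind few-shot sarcasm detection, but does not spell out the objective. As a rough illustration only, the sketch below shows the standard knowledge-distillation loss (Hinton-style soft targets blended with hard-label cross-entropy); the function names, temperature, and mixing weight are illustrative assumptions, not the actual Teaching Sarcasm objective.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of raw logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, label,
                      temperature=2.0, alpha=0.5):
    """Generic knowledge-distillation loss (illustrative, not the paper's):
    alpha * KL(teacher || student) at temperature T, scaled by T^2,
    plus (1 - alpha) * cross-entropy against the hard label."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    # Soft-target term: how far the student is from the teacher's distribution
    kl = sum(pt * math.log(pt / ps) for pt, ps in zip(p_teacher, p_student))
    soft_term = (temperature ** 2) * kl
    # Hard-label term: ordinary cross-entropy at temperature 1
    hard_term = -math.log(softmax(student_logits)[label])
    return alpha * soft_term + (1 - alpha) * hard_term

# Example: binary sarcastic / not-sarcastic logits for one sample
loss = distillation_loss([1.2, -0.3], [2.0, -1.0], label=0)
```

In a real training loop this scalar would be computed per batch in an autograd framework and backpropagated through only the parameter-efficient adapter weights of the student, which is what makes the few-shot setting tractable.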