The field of natural language processing is moving toward leveraging multimodal large language models (MLLMs) to enhance document fraud detection and scientific text simplification. Recent studies have demonstrated the potential of these models for detecting subtle indicators of fraud and for simplifying complex scientific text, showing superior zero-shot generalization and outperforming conventional supervised methods on out-of-distribution datasets. However, the rise of generative AI tools has introduced new challenges, such as model-level misperception drift and evidence-level drift, which can degrade the robustness of current multimodal misinformation detection systems. To address these challenges, researchers are exploring novel methods, such as fuzzification-based approaches, to improve the safety and reasoning capabilities of LLMs.

Noteworthy papers in this area include:

- A paper on multimodal LLMs for document fraud detection, demonstrating the effectiveness of these models at identifying fraudulent documents (a zero-shot prompting sketch follows below).
- A paper on LLM-guided planning and summary-based scientific text simplification, presenting a two-stage framework for simplifying scientific text (a sketch of such a pipeline also follows below).
- A paper on unveiling trust in multimodal large language models, proposing a comprehensive benchmark for evaluating and mitigating trustworthiness issues in MLLMs.
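
The digest does not specify a model or API for the fraud-detection work, but the zero-shot setting it describes can be illustrated with a minimal sketch. Everything below is an assumption for illustration: an OpenAI-compatible endpoint, the placeholder model id "gpt-4o-mini", and the prompt wording; the paper's actual prompts and model are not given here.

```python
# Minimal sketch of zero-shot document fraud screening with a multimodal LLM.
# Assumptions (not from the digest): an OpenAI-compatible API, a placeholder
# vision-capable model id, and a scanned document image on disk.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def screen_document(image_path: str) -> str:
    """Ask the model for subtle fraud indicators in a scanned document, zero-shot."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any vision-capable model works
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": ("You are screening scanned documents for fraud. "
                          "List any subtle indicators of tampering (font "
                          "inconsistencies, misaligned fields, edited totals) "
                          "and end with a verdict: GENUINE or SUSPICIOUS.")},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content


print(screen_document("invoice_scan.png"))
```

The appeal of this setup, as the digest notes, is that nothing is trained: generalization to out-of-distribution document types comes entirely from the pretrained model, so the same prompt can screen invoices, IDs, or contracts.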
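
The two-stage simplification framework is only named, not described, so the following is a plausible reading rather than the paper's method: stage one drafts a plan and summary of the passage, and stage two rewrites the text conditioned on that plan. The model id and prompt wording are again placeholders.

```python
# Sketch of a two-stage plan-then-simplify pipeline for scientific text.
# Assumption (not specified in the digest): stage 1 produces a short plan and
# one-sentence summary; stage 2 rewrites the passage guided by that plan.
from openai import OpenAI

client = OpenAI()


def _ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model id
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def simplify(passage: str) -> str:
    # Stage 1: decide what to keep, define, and drop, plus a short summary.
    plan = _ask(
        "Read this scientific passage. Produce (a) a one-sentence summary and "
        "(b) a bullet plan of which terms to define, which details to keep, "
        f"and which to drop for a lay reader:\n\n{passage}"
    )
    # Stage 2: rewrite the passage guided by the plan, not from scratch.
    return _ask(
        "Rewrite the passage below for a general audience, following the "
        f"plan exactly.\n\nPLAN:\n{plan}\n\nPASSAGE:\n{passage}"
    )
```

Separating planning from generation in this way is a common pattern for controllable rewriting: the intermediate plan makes the simplification auditable and lets errors be caught before the final text is produced.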