Advances in Document Fraud Detection and Scientific Text Simplification

The field of natural language processing is moving toward multi-modal large language models (LLMs) for document fraud detection and scientific text simplification. Recent studies demonstrate that these models can detect subtle indicators of fraud and simplify complex scientific text, exhibiting strong zero-shot generalization and outperforming conventional methods on out-of-distribution datasets. At the same time, the rise of generative AI tools has introduced new challenges, such as model-level misperception drift and evidence-level drift, which degrade the robustness of current multimodal misinformation detection systems. To address these challenges, researchers are exploring novel methods, including fuzzification-based approaches that balance the safety and reasoning capabilities of large reasoning models.

Noteworthy papers in this area include a study of multi-modal LLMs for document fraud detection, which demonstrated the effectiveness of these models at identifying manipulated documents; a two-stage framework for LLM-guided planning and summary-based scientific text simplification; and a comprehensive benchmark for evaluating and mitigating trustworthiness issues in multimodal LLMs.
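To make the zero-shot detection setting concrete, the sketch below shows how a document might be framed as a single-turn request to a multimodal LLM and how a free-text verdict could be mapped back to a label. This is a minimal illustration, not the method of any cited paper: the prompt wording, label set, and parsing rule are all assumptions.

```python
# Illustrative zero-shot fraud-detection prompting sketch.
# The prompt text and GENUINE/FRAUDULENT label set are assumptions,
# not taken from the cited paper.

FRAUD_PROMPT = (
    "You are a document-forensics assistant. Examine the document below "
    "for signs of manipulation (inconsistent fonts, altered totals, "
    "mismatched metadata). Answer with exactly one word: "
    "GENUINE or FRAUDULENT.\n\n{document}"
)

def build_messages(document: str) -> list:
    """Build a single-turn chat request for an LLM API (zero-shot: no
    labeled examples are included in the prompt)."""
    return [{"role": "user", "content": FRAUD_PROMPT.format(document=document)}]

def parse_verdict(reply: str) -> bool:
    """Map the model's free-text reply to a boolean fraud flag."""
    return "FRAUDULENT" in reply.strip().upper()
```

In practice, `build_messages` would be passed to whatever chat-completion API hosts the multimodal model, with the document image attached alongside the text; constraining the answer to a fixed label vocabulary keeps `parse_verdict` trivial.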

Sources

Can Multi-modal (reasoning) LLMs detect document manipulation?

LLM-Guided Planning and Summary-Based Scientific Text Simplification: DS@GT at CLEF 2025 SimpleText

Hallucination Detection and Mitigation in Scientific Text Simplification using Ensemble Approaches: DS@GT at CLEF 2025 SimpleText

Drifting Away from Truth: GenAI-Driven News Diversity Challenges LVLM-Based Misinformation Detection

FuSaR: A Fuzzification-Based Method for LRM Safety-Reasoning Balance

Comparative Evaluation of Text and Audio Simplification: A Methodological Replication Study

Unveiling Trust in Multimodal Large Language Models: Evaluation, Analysis, and Mitigation
