The field of multimodal research is moving toward more reliable and transparent AI-generated answers through new methods for reasoning and explanation. A key direction is the integration of Chain-of-Thought (CoT) reasoning with visual evidence attribution, enabling models to produce predictions that are both verifiable and interpretable. Another focus is multimodal source attribution, which aims to make answers more trustworthy by attaching a reference to each statement, together with the benchmarks and evaluation metrics needed to assess such attribution. Reinforcement learning and contrastive training are also being explored to improve the quality of multimodal embeddings and, in turn, multimodal retrieval performance.

Noteworthy papers in this area include Look As You Think, which introduces a reinforcement learning framework for training models to produce verifiable reasoning paths with consistent attribution; Step-Audio-R1, which unlocks reasoning capabilities in the audio domain through a Modality-Grounded Reasoning Distillation framework; and Reasoning Guided Embeddings, which explicitly incorporates reasoning into the embedding process to enhance representation quality.
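The contrastive training mentioned above typically pulls paired image and text embeddings together while pushing mismatched pairs apart. The sketch below shows a minimal InfoNCE-style symmetric loss in PyTorch; the function name, embedding dimension, batch size, and temperature are illustrative assumptions and are not drawn from any of the papers cited here.

```python
# Minimal sketch of contrastive training for multimodal embeddings (InfoNCE-style),
# assuming paired image/text features have already been produced by some encoders.
import torch
import torch.nn.functional as F


def infonce_loss(image_emb: torch.Tensor, text_emb: torch.Tensor,
                 temperature: float = 0.07) -> torch.Tensor:
    """Symmetric contrastive loss over a batch of paired image/text embeddings."""
    # L2-normalize so the dot product becomes a cosine similarity.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # Similarity matrix: entry (i, j) compares image i with text j.
    logits = image_emb @ text_emb.t() / temperature
    # Matching pairs lie on the diagonal.
    targets = torch.arange(logits.size(0), device=logits.device)
    # Average the image-to-text and text-to-image cross-entropy terms.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2


if __name__ == "__main__":
    # Toy batch: 8 paired embeddings of dimension 256 as random stand-ins for encoder outputs.
    img = torch.randn(8, 256)
    txt = torch.randn(8, 256)
    print(infonce_loss(img, txt).item())
```

In practice the two embedding towers would be trained jointly under this objective, and retrieval quality is then measured by how often the matching caption (or image) ranks first under the same cosine similarity.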