The field of AI research is moving toward safer, more trustworthy models. One active direction is identifying and mitigating implicit toxicity in large multimodal models, which can perpetuate harmful biases and prejudices; to support this, new benchmarks and evaluation metrics are being proposed to assess how sensitive models are to dual-implicit toxicity. There is also a growing emphasis on transparency and accountability in dataset documentation, including indicators and frameworks for analyzing and comparing the dataset attributes that affect the trustworthiness and ethics of AI applications. Another thread develops general-purpose generation models that unify diverse tasks across modalities within a single system, with a focus on robust and scalable workflows. Finally, researchers are exploring safety-constrained evolutionary programming and information-flow control as ways to secure AI agents against vulnerabilities.

Noteworthy papers include:

- MDIT-Bench, which introduces a novel benchmark for evaluating dual-implicit toxicity in large multimodal models.
- ComfyMind, which presents a collaborative AI system for general-purpose generation via tree-based planning and reactive feedback.
- MermaidFlow, which redefines agentic workflow generation via safety-constrained evolutionary programming.
- Securing AI Agents with Information-Flow Control, which explores information-flow control as a way to provide security guarantees for AI agents.

Minimal sketches of the last three ideas follow.
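To make the tree-based-planning idea concrete, here is a minimal Python sketch under invented assumptions, not ComfyMind's implementation: a goal expands into a tree of subtasks, leaves call a generation backend, and a failed leaf is retried as a simple form of reactive feedback (a real system would replan rather than just retry). The decomposition table and function names are hypothetical.

```python
import random

# Hypothetical decomposition table: composite goals expand into subgoals.
DECOMPOSE = {
    "make_poster": ["generate_image", "add_caption"],
    "generate_image": ["draft_image", "upscale_image"],
}

def execute_leaf(task: str) -> bool:
    """Stand-in for a generation backend; fails randomly to exercise the feedback path."""
    return random.random() > 0.3

def run(task: str, retries: int = 3) -> bool:
    """Depth-first execution of the planning tree, retrying failed leaves."""
    subtasks = DECOMPOSE.get(task)
    if subtasks is None:  # leaf task: execute, retrying on failure
        return any(execute_leaf(task) for _ in range(retries))
    return all(run(sub, retries) for sub in subtasks)

if __name__ == "__main__":
    random.seed(0)
    print("succeeded:", run("make_poster"))
```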
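Safety-constrained evolutionary programming can likewise be sketched as an evolutionary loop in which a hard safety predicate filters offspring before they join the population, so unsafe workflows are never selected regardless of fitness. A toy list of step names stands in for whatever graph representation MermaidFlow actually evolves; the step names, constraints, and fitness function below are invented for illustration.

```python
import random

STEPS = ["plan", "retrieve", "generate", "verify", "execute"]
FORBIDDEN = {("execute", "execute")}  # invented constraint: no back-to-back executes

def is_safe(workflow: list[str]) -> bool:
    """Hypothetical static safety check applied before a candidate enters the population."""
    if workflow and workflow[-1] == "execute":
        return False  # every execute must be followed by another step
    if any(pair in FORBIDDEN for pair in zip(workflow, workflow[1:])):
        return False
    return "verify" in workflow  # require a verification step somewhere

def fitness(workflow: list[str]) -> float:
    """Stand-in objective: prefer short workflows that still verify and execute."""
    return ("verify" in workflow) + ("execute" in workflow) - 0.1 * len(workflow)

def mutate(workflow: list[str]) -> list[str]:
    w = workflow.copy()
    op = random.choice(["add", "swap", "drop"])
    if op == "add" or len(w) < 2:
        w.insert(random.randrange(len(w) + 1), random.choice(STEPS))
    elif op == "swap":
        w[random.randrange(len(w))] = random.choice(STEPS)
    else:
        w.pop(random.randrange(len(w)))
    return w

def evolve(generations: int = 50, pop_size: int = 20) -> list[str]:
    population = [["plan", "verify"]]  # safe seed workflow
    for _ in range(generations):
        children = [mutate(random.choice(population)) for _ in range(pop_size)]
        # The safety constraint acts as a hard filter, not a fitness penalty.
        population += [c for c in children if is_safe(c)]
        population.sort(key=fitness, reverse=True)
        population = population[:pop_size]
    return population[0]

if __name__ == "__main__":
    print(evolve())
```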
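Finally, the essence of information-flow control for agents can be shown as label (taint) tracking: every value carries a label, labels propagate through tool calls, and policy blocks flows of untrusted data into sensitive sinks. The labels, policy table, and call_tool wrapper below are hypothetical and are not the paper's API.

```python
from dataclasses import dataclass

TRUSTED, UNTRUSTED = "trusted", "untrusted"  # hypothetical two-point label lattice

@dataclass
class Labeled:
    value: str
    label: str

def join(*labels: str) -> str:
    """Label join: any untrusted input taints the result."""
    return UNTRUSTED if UNTRUSTED in labels else TRUSTED

# Hypothetical per-tool policy: the integrity a tool's inputs must carry.
TOOL_POLICY = {
    "read_webpage": UNTRUSTED,  # may consume anything
    "send_email": TRUSTED,      # side-effecting sink: trusted inputs only
}

def call_tool(name: str, *args: Labeled) -> Labeled:
    in_label = join(*(a.label for a in args))
    if TOOL_POLICY[name] == TRUSTED and in_label == UNTRUSTED:
        raise PermissionError(f"IFC violation: untrusted data reached {name}")
    # Data fetched from the outside world is untrusted by default.
    out_label = UNTRUSTED if name == "read_webpage" else in_label
    return Labeled(value=f"<result of {name}>", label=out_label)

if __name__ == "__main__":
    page = call_tool("read_webpage")   # returns untrusted web content
    try:
        call_tool("send_email", page)  # blocked: tainted flow into a sink
    except PermissionError as err:
        print(err)
```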