The field of artificial intelligence is moving toward more trustworthy and transparent systems. Researchers are developing frameworks that quantify and enhance trustworthiness in multimodal systems, such as those that integrate text, images, and other modalities; this includes benchmarking frameworks and assessment methods that evaluate the performance and reliability of such systems (a minimal illustrative scoring sketch appears after the paper list). There is also a growing emphasis on responsible AI deployment, with a focus on addressing complex social problems and mitigating potential harms. Blockchain-based AI governance is being explored as a way to enhance transparency, security, and compliance in various applications. Furthermore, the growing sophistication of deepfakes and other generative AI is prompting calls for a new security mindset, one that systematically doubts information perceived through the senses and establishes rigorous verification protocols (sketched below as a signed-provenance check).

Noteworthy papers include:

- Evaluating VisualRAG: introduces a systematic, quantitative benchmarking framework to measure trustworthiness in multimodal generative AI.
- Peer Review as Structured Commentary: proposes a transparent, identity-linked, and reproducible system of scholarly evaluation.
- Ask before you Build: introduces the Radical Questioning framework as a pre-project ethical assessment tool.
- Bridging Ethical Principles and Algorithmic Methods: combines ethical components of Trustworthy AI with algorithmic processes.
- AI-Governed Agent Architecture for Web-Trustworthy Tokenization of Alternative Assets: proposes an AI-governed agent architecture for trustworthy tokenization.
- The Age of Sensorial Zero Trust: presents a scientific analysis of the need to systematically doubt information perceived through the senses.
- Can Artificial Intelligence solve the blockchain oracle problem: critically assesses the role of AI in tackling the oracle problem.
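To make the benchmarking idea concrete, here is a minimal sketch of how a multi-dimensional trustworthiness score might be aggregated for a multimodal system. The dimension names (faithfulness, provenance, consistency) and the weights are illustrative assumptions for this sketch, not the metrics defined in Evaluating VisualRAG.

```python
from dataclasses import dataclass

@dataclass
class TrustScores:
    # Hypothetical per-dimension scores in [0, 1]; the dimensions are
    # assumptions for illustration, not the paper's actual metrics.
    faithfulness: float   # is the answer grounded in the retrieved evidence?
    provenance: float     # are the cited sources attributable and verifiable?
    consistency: float    # are answers stable across paraphrased queries?

def aggregate(scores: TrustScores,
              weights: tuple[float, float, float] = (0.5, 0.3, 0.2)) -> float:
    """Weighted aggregate trust score in [0, 1]; weights are illustrative."""
    dims = (scores.faithfulness, scores.provenance, scores.consistency)
    return sum(w * d for w, d in zip(weights, dims))

if __name__ == "__main__":
    example = TrustScores(faithfulness=0.9, provenance=0.7, consistency=0.8)
    print(f"trust score: {aggregate(example):.2f}")
```

A real benchmark would compute each dimension from held-out test cases rather than assigning it by hand; the point of the sketch is only that trustworthiness becomes a measurable, comparable quantity instead of a qualitative claim.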
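Similarly, the sensorial zero trust posture of "verify before trusting" can be illustrated as a provenance check that rejects media unless its cryptographic signature verifies. The Ed25519 scheme and the signed-at-capture workflow below are assumptions for illustration, in the spirit of C2PA-style content provenance, and are not the protocol proposed in The Age of Sensorial Zero Trust.

```python
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

def is_trustworthy(media_bytes: bytes, signature: bytes,
                   signer_key: ed25519.Ed25519PublicKey) -> bool:
    """Zero-trust default: reject the media unless its provenance verifies."""
    try:
        signer_key.verify(signature, media_bytes)
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    # Demo: assume a trusted capture device signs media at creation time.
    private_key = ed25519.Ed25519PrivateKey.generate()
    media = b"frame-0001: raw sensor data"
    sig = private_key.sign(media)
    public_key = private_key.public_key()

    print(is_trustworthy(media, sig, public_key))               # True
    print(is_trustworthy(media + b" tampered", sig, public_key))  # False
```

The design choice worth noting is the default: unverifiable media is treated as untrusted, inverting the usual assumption that sensory input is authentic until proven fake.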