The field of artificial intelligence is moving toward closer collaboration between humans and AI systems, with a focus on frameworks that combine the strengths of both. Recent research has highlighted the importance of epistemic trust, that is, how far the knowledge claims in AI outputs can be relied upon, and has proposed methods for evaluating and improving it. One key direction is weak-to-strong generalization, in which a stronger model learns from the supervision of a weaker one and generalizes beyond it; RAVEN is a noteworthy example of a robust weak-to-strong generalization framework. Another direction is diagnostic methodologies for assessing AI knowledge claims, exemplified by the Epistemic Suite, which offers a post-foundational approach to evaluating AI outputs. Finally, research on human-AI collaborative uncertainty quantification has shown promising results, with frameworks such as Human-AI Collaborative Uncertainty Quantification achieving higher coverage and smaller prediction-set sizes than either humans or AI alone.
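
The coverage and set-size comparison above maps naturally onto conformal prediction. The sketch below is a minimal, illustrative split-conformal example and not the published framework: the function names, the toy data, and the simulated human "veto" step (assumed here never to rule out the true class) are all assumptions made for illustration only.

```python
import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """Split-conformal calibration: score = 1 - probability assigned to the true class."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores.
    return np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

def prediction_sets(test_probs, q):
    """Include every class whose score (1 - prob) is at most the threshold q."""
    return test_probs >= 1.0 - q  # boolean matrix of shape (n_test, n_classes)

def evaluate(sets, labels):
    coverage = sets[np.arange(len(labels)), labels].mean()   # fraction of sets containing the truth
    avg_size = sets.sum(axis=1).mean()                       # mean number of classes per set
    return coverage, avg_size

# Toy illustration: AI-only prediction sets vs. sets narrowed by a simulated human veto.
rng = np.random.default_rng(0)
n_cal, n_test, k = 500, 500, 10
logits = rng.normal(size=(n_cal + n_test, k))
labels = rng.integers(0, k, size=n_cal + n_test)
logits[np.arange(n_cal + n_test), labels] += 2.0   # make the true class more likely
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

q = conformal_threshold(probs[:n_cal], labels[:n_cal], alpha=0.1)
ai_sets = prediction_sets(probs[n_cal:], q)

# Hypothetical human input: the human rules out ~30% of classes as implausible,
# modeled (optimistically) as never ruling out the true class.
ruled_out = rng.random(ai_sets.shape) < 0.3
ruled_out[np.arange(n_test), labels[n_cal:]] = False
joint_sets = ai_sets & ~ruled_out

print("AI-only:  coverage=%.3f, avg set size=%.2f" % evaluate(ai_sets, labels[n_cal:]))
print("Human+AI: coverage=%.3f, avg set size=%.2f" % evaluate(joint_sets, labels[n_cal:]))
```

Under these toy assumptions the combined sets keep the target coverage while shrinking, which is the qualitative behavior the collaborative uncertainty quantification work reports; the actual framework's mechanism for eliciting and combining human judgments is more involved than this veto model.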