The field of artificial intelligence is moving towards more explainable and human-centered approaches. Researchers are focusing on frameworks and methodologies that prioritize transparency, interpretability, and accountability in AI systems. This shift is driven by the need to counter AI-generated misinformation, preserve patient trust in healthcare, and develop evaluation metrics that better reflect human judgment. Noteworthy papers in this regard include: Safeguarding Patient Trust in the Age of AI, which presents an explainable AI framework to combat medical misinformation; and An Approach to Grounding AI Model Evaluations in Human-derived Criteria, which proposes augmenting existing benchmarks with human-derived evaluation criteria.