Explainable AI and Human-Centered Evaluation

The field of artificial intelligence is moving toward a more explainable and human-centered approach. Researchers are developing frameworks and methodologies that prioritize transparency, interpretability, and accountability in AI systems. This shift is driven by the need to counter AI-generated misinformation, maintain patient trust in healthcare, and devise more effective evaluation metrics for AI models. Noteworthy papers in this regard include Safeguarding Patient Trust in the Age of AI, which presents an explainable AI framework to combat medical misinformation, and An Approach to Grounding AI Model Evaluations in Human-derived Criteria, which proposes augmenting existing benchmarks with human-derived evaluation criteria.

Sources

Exam Readiness Index (ERI): A Theoretical Framework for a Composite, Explainable Index

Generative KI für TA

Safeguarding Patient Trust in the Age of AI: Tackling Health Misinformation with Explainable AI

Evaluating Quality of Gaming Narratives Co-created with AI

An Approach to Grounding AI Model Evaluations in Human-derived Criteria

From Vision to Validation: A Theory- and Data-Driven Construction of a GCC-Specific AI Adoption Index

If generative AI is the answer, what is the question?

Performance Assessment Strategies for Generative AI Applications in Healthcare

Accelerating AI Development with Cyber Arenas
