The field of Artificial Intelligence (AI) is shifting toward a more human-centered approach, focused on developing systems that are not only efficient but also transparent, trustworthy, and respectful of diverse users' needs. This shift is driven by the limitations of current evaluation metrics, which often fail to capture the complexities of human communication and decision-making. Researchers are exploring new frameworks and methods that incorporate human feedback, explainability, and logical-equivalence checking to build AI systems better aligned with human values and expectations.
Noteworthy papers in this area include:

- The Human-Centered Readability Score (HCRS), a five-dimensional evaluation framework grounded in Human-Computer Interaction (HCI) and health communication research.
- An investigation of aggregating explanations from multiple models, which shows how pooling several models' predictive power can increase trust in AI systems (a minimal aggregation sketch follows this list).
- The use of automated theorem proving to evaluate neural semantic parsers, which highlights the limits of graph-based metrics for reasoning-oriented applications (see the equivalence-checking sketch below).
- A three-component framework for customer churn analytics, which integrates explainable AI, survival analysis, and RFM profiling to support personalized retention strategies (an RFM scoring sketch closes this section).
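The aggregation paper's exact method is not described in this summary; one plausible reading is that per-feature attributions (for example, SHAP-style values) from several models are normalized and averaged into a consensus explanation. The sketch below is a minimal illustration under that assumption; the attribution arrays and model names are hypothetical.

```python
import numpy as np

def aggregate_attributions(attributions: list[np.ndarray]) -> np.ndarray:
    """Average per-feature attributions from several models.

    Each array holds one model's attribution scores for the same input
    and the same feature order. Scores are L1-normalized per model first,
    so a model with larger raw magnitudes does not dominate the consensus.
    """
    normalized = [a / (np.abs(a).sum() + 1e-12) for a in attributions]
    return np.mean(normalized, axis=0)

# Hypothetical attributions for one input from three models.
model_a = np.array([0.40, -0.10, 0.05])
model_b = np.array([0.35, -0.20, 0.10])
model_c = np.array([0.50, -0.05, 0.02])

consensus = aggregate_attributions([model_a, model_b, model_c])
print(consensus)  # features the models jointly consider important
```

A consensus explanation that several independently trained models agree on is, intuitively, harder to dismiss as one model's artifact, which is the trust argument the paper appears to make.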
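The theorem-proving paper's prover and target logic are not named here. As an illustration of the general idea, two candidate logical forms can be checked for equivalence by asking a solver whether their biconditional can be falsified, something graph-overlap metrics (such as Smatch for AMR) cannot detect when the forms are structurally different but semantically identical. The sketch below uses Z3 (the `z3-solver` package) purely as a stand-in prover, with a propositional example for brevity.

```python
from z3 import And, Bool, Implies, Not, Or, Solver, unsat

def logically_equivalent(f, g) -> bool:
    """Return True iff formulas f and g agree on every assignment.

    We ask the solver for a model where f and g differ; if none
    exists (unsat), the two formulas are logically equivalent.
    """
    s = Solver()
    s.add(f != g)
    return s.check() == unsat

# Hypothetical gold vs. predicted parses of "every long review is detailed".
p, q = Bool("long"), Bool("detailed")
gold = Implies(p, q)
pred = Or(Not(p), q)  # structurally different, semantically identical

print(logically_equivalent(gold, pred))         # True: same semantics
print(logically_equivalent(gold, And(p, q)))    # False: stricter claim
```

A graph-matching score would penalize `pred` for not sharing `gold`'s structure, while an equivalence check accepts it, which is the gap the paper highlights for reasoning-oriented applications.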
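RFM (Recency, Frequency, Monetary) profiling is a standard segmentation technique; the churn paper's exact scoring is not given in this summary, so the sketch below shows a common quantile-based variant in pandas. The column names and transaction data are hypothetical.

```python
import pandas as pd

# Hypothetical transaction log: one row per purchase.
tx = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 3, 3],
    "date": pd.to_datetime(
        ["2024-01-05", "2024-03-01", "2023-11-20",
         "2024-02-10", "2024-02-28", "2024-03-15"]),
    "amount": [120.0, 80.0, 45.0, 200.0, 60.0, 95.0],
})

now = pd.Timestamp("2024-04-01")
rfm = tx.groupby("customer_id").agg(
    recency=("date", lambda d: (now - d.max()).days),  # days since last purchase
    frequency=("date", "count"),                       # number of purchases
    monetary=("amount", "sum"),                        # total spend
)

# Score each dimension 1-3 (tertiles here; quintiles are common with more data).
# Lower recency is better, so its labels run in reverse.
rfm["R"] = pd.qcut(rfm["recency"], 3, labels=[3, 2, 1])
rfm["F"] = pd.qcut(rfm["frequency"].rank(method="first"), 3, labels=[1, 2, 3])
rfm["M"] = pd.qcut(rfm["monetary"], 3, labels=[1, 2, 3])
print(rfm)
```

Profiles like these give the explainable-AI and survival-analysis components a business-readable segmentation to attach churn risk and retention actions to, which matches the personalized-retention framing above.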