Explainable AI and Human-Centered Approaches

The field of artificial intelligence is moving toward more explainable, human-centered approaches. Researchers are focusing on systems that can give transparent and engaging explanations for their recommendations and decisions. This shift is driven by the need for greater trust in and understanding of AI-driven systems, particularly in areas such as public health and the biomedical sciences.

Noteworthy papers in this area include:

CityHood, an interactive and explainable travel recommendation system that provides personalized recommendations at the city and neighborhood levels.

PHAX, a structured argumentation framework for user-centered explainable AI in public health and the biomedical sciences.

DGP, a dual-granularity prompting framework for fraud detection with graph-enhanced large language models, reported to improve performance by up to 6.8% over state-of-the-art methods (an illustrative sketch of the prompting idea follows below).
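
To make the dual-granularity idea concrete, the following is a minimal, hypothetical sketch of how node-level (fine-grained) and neighborhood-level (coarse-grained) evidence might be combined into a single prompt for an LLM-based fraud analyst. The class names, field names, and prompt wording are assumptions for illustration only and are not taken from the DGP paper.

```python
# Hypothetical sketch of dual-granularity prompting for graph-based fraud
# detection. The data model and prompt structure are illustrative
# assumptions, not the DGP paper's actual method.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Account:
    account_id: str
    tx_count: int
    avg_amount: float
    neighbors: List["Account"] = field(default_factory=list)


def fine_grained_view(node: Account) -> str:
    # Fine granularity: describe the target node's own attributes.
    return (f"Account {node.account_id} made {node.tx_count} transactions "
            f"with an average amount of {node.avg_amount:.2f}.")


def coarse_grained_view(node: Account) -> str:
    # Coarse granularity: summarize the neighborhood instead of
    # enumerating every neighbor, keeping the prompt compact.
    n = len(node.neighbors)
    if n == 0:
        return "The account has no known counterparties."
    avg_neighbor_tx = sum(m.tx_count for m in node.neighbors) / n
    return (f"The account transacts with {n} counterparties, "
            f"averaging {avg_neighbor_tx:.1f} transactions each.")


def build_prompt(node: Account) -> str:
    # Combine both granularities into one instruction for the LLM.
    return (
        "You are a fraud analyst. Based on the evidence, answer "
        "'fraud' or 'legitimate' and give one sentence of reasoning.\n"
        f"Node-level evidence: {fine_grained_view(node)}\n"
        f"Neighborhood-level evidence: {coarse_grained_view(node)}"
    )


if __name__ == "__main__":
    target = Account("A-17", tx_count=412, avg_amount=9.87,
                     neighbors=[Account("B-3", 5, 120.0),
                                Account("C-9", 7, 95.5)])
    print(build_prompt(target))  # this prompt would be sent to an LLM
```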

Sources

CityHood: An Explainable Travel Recommender System for Cities and Neighborhoods

Towards LLM-Enhanced Group Recommender Systems

Empathy in Explanation

Finding Uncommon Ground: A Human-Centered Model for Extrospective Explanations

DGP: A Dual-Granularity Prompting Framework for Fraud Detection with Graph-Enhanced LLMs

PHAX: A Structured Argumentation Framework for User-Centered Explainable AI in Public Health and Biomedical Sciences

LLMs Between the Nodes: Community Discovery Beyond Vectors

An Interpretable Data-Driven Unsupervised Approach for the Prevention of Forgotten Items
