The field of artificial intelligence is moving toward more explainable and human-centered approaches. Researchers are developing systems that provide transparent, engaging explanations for their recommendations and decisions. This shift is driven by the need for greater trust and understanding of AI-driven systems, particularly in areas such as public health and biomedical sciences.
Noteworthy papers in this area include CityHood, an interactive and explainable travel recommendation system that provides personalized recommendations at the city and neighborhood levels; PHAX, a structured argumentation framework for user-centered explainable AI in public health and biomedical sciences; and DGP, a dual-granularity prompting framework for fraud detection with graph-enhanced large language models that improves performance by up to 6.8% over state-of-the-art methods.
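To make the dual-granularity idea concrete, the sketch below shows one plausible way to combine a fine-grained view of a target node with a coarse-grained summary of its graph neighborhood in a single LLM prompt. This is a minimal illustration under assumed names and schema (`Account`, `build_dual_granularity_prompt`, the `activity` feature), not DGP's actual interface or method.

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    """A node in a transaction graph (hypothetical schema)."""
    account_id: str
    features: dict                                  # fine-grained node attributes
    neighbors: list = field(default_factory=list)   # adjacent Account objects

def build_dual_granularity_prompt(node: Account, max_neighbors: int = 5) -> str:
    """Compose a prompt that pairs fine-grained node detail with a
    coarse-grained neighborhood summary (illustrative only)."""
    # Fine granularity: the target node's own attributes, verbatim.
    fine = "; ".join(f"{k}={v}" for k, v in node.features.items())

    # Coarse granularity: an aggregated view of the neighborhood,
    # keeping the prompt short even for high-degree nodes.
    sampled = node.neighbors[:max_neighbors]
    coarse = (
        f"{len(node.neighbors)} connected accounts; sample activity: "
        + "; ".join(str(n.features.get("activity", "unknown")) for n in sampled)
    )

    return (
        "You are a fraud analyst. Classify the target account as FRAUD or LEGIT.\n"
        f"Target account ({node.account_id}): {fine}\n"
        f"Neighborhood summary: {coarse}\n"
        "Answer with one word."
    )

# Usage: build the prompt for a candidate node, then pass it to any LLM client.
target = Account("acct-42", {"age_days": 3, "tx_count": 180, "activity": "bursty"},
                 neighbors=[Account("acct-7", {"activity": "dormant"})])
print(build_dual_granularity_prompt(target))
```

The design intuition is that the fine-grained view preserves the target's raw evidence while the coarse-grained summary conveys graph context without exceeding the model's context window; how DGP actually constructs and fuses the two granularities is specified in the paper itself.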