The field of artificial intelligence is moving toward greater transparency and explainability, particularly in large language models. Recent work has focused on methods for evaluating and improving the explainability of these models, including standardized benchmarks and assessment frameworks. One key direction is co-constructive explanation dialogues, in which the model engages with users to provide dynamic explanations tailored to their needs. Another is the development of evaluation frameworks for explainable AI (XAI) in object detection, which offer systematic assessment of XAI methods in that setting. Researchers are also applying large language models in domains such as digital advertising and e-government services, where explainability is crucial for building trust and improving decision-making. Noteworthy papers include:
- BELL: Benchmarking the Explainability of Large Language Models, which introduces a standardized benchmark for evaluating LLM explainability.
- ODExAI: A Comprehensive Object Detection Explainable AI Evaluation, which provides a framework for systematically assessing XAI methods in object detection; a sketch of the kind of faithfulness test such frameworks build on follows this list.
- Against Opacity: Explainable AI and Large Language Models for Effective Digital Advertising, which proposes a novel system that combines large language models with explainable AI to improve digital advertising strategies.
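
To make concrete what these XAI evaluation frameworks measure, below is a minimal, self-contained sketch of a deletion-style faithfulness test, a generic metric from the XAI literature. The toy linear model, function names, and data here are illustrative assumptions, not the actual BELL or ODExAI procedures.

```python
# Minimal sketch of a deletion-style faithfulness metric (assumption:
# a toy stand-in, not the BELL or ODExAI pipelines).
import numpy as np

def deletion_curve(model, x, attributions, steps=10, baseline=0.0):
    """Zero out features in decreasing order of attributed importance and
    record the model score after each step; faithful attributions should
    make the score collapse quickly."""
    order = np.argsort(-np.abs(attributions))  # most important first
    x_perturbed = x.astype(float)
    scores = [model(x_perturbed)]
    chunk = max(1, len(order) // steps)
    for i in range(0, len(order), chunk):
        x_perturbed[order[i:i + chunk]] = baseline
        scores.append(model(x_perturbed))
    return np.array(scores)

# Toy stand-in model: magnitude of a linear score, so the exact
# per-feature attributions are simply weights * x.
rng = np.random.default_rng(0)
weights = rng.normal(size=50)
model = lambda x: abs(float(weights @ x))
x = rng.normal(size=50)

exact = deletion_curve(model, x, attributions=weights * x)
random_attr = deletion_curve(model, x, attributions=rng.normal(size=50))

# A lower average score along the curve means the explanation identified
# the truly influential features; comparing curves ranks XAI methods.
print("mean deletion score, exact attributions: ", exact.mean())
print("mean deletion score, random attributions:", random_attr.mean())
```

Real evaluation frameworks typically combine several complementary criteria rather than a single curve, but the comparison above captures the basic idea: an explanation method is scored by how well its attributions predict the model's behavior under perturbation.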