The field of AI is placing greater emphasis on explainability and trustworthiness, with a focus on understanding the limitations and potential biases of AI systems. Recent research has called for re-evaluating whether explanations actually build trust in AI-based systems, particularly in high-risk domains. Researchers are also exploring the use of Large Language Models (LLMs) as devil's advocates that actively interrogate AI explanations and present alternative interpretations. In addition, there is growing recognition of the need for privacy-aware agent design and for more reliable and trustworthy GUI agents.

Noteworthy papers in this area include 'Even explanations will not help in trusting [this] fundamentally biased system', which found that explanations did not lead to better decision-making and argued that their role in building trust should be re-evaluated, and 'Don't Just Translate, Agitate', which proposed using LLMs as devil's advocates to encourage critical engagement with AI systems and reduce overreliance caused by misinterpreted explanations.
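
As a rough illustration of the devil's-advocate idea (a minimal sketch, not the specific method from 'Don't Just Translate, Agitate'), the snippet below prompts an LLM to challenge a given prediction and explanation. It assumes the openai Python package with an OpenAI-compatible chat API; the model name and prompt wording are illustrative placeholders.

```python
# Hypothetical sketch: an LLM "devil's advocate" that critiques a model explanation.
# Assumes the openai package (v1+) and an OPENAI_API_KEY in the environment;
# the model name and prompts are placeholders, not the paper's actual method.
from openai import OpenAI

client = OpenAI()

def devils_advocate(prediction: str, explanation: str) -> str:
    """Ask an LLM to challenge an AI explanation and offer alternative readings."""
    prompt = (
        f"An AI system predicted: {prediction}\n"
        f"Its explanation was: {explanation}\n\n"
        "Act as a devil's advocate: point out weaknesses in this explanation, "
        "list plausible alternative interpretations of the same evidence, and "
        "state what a human reviewer should verify before trusting the prediction."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(devils_advocate(
        prediction="Loan application denied",
        explanation="The applicant's zip code and employment gap were the main factors.",
    ))
```

The design intent is that the critique accompanies, rather than replaces, the original explanation, giving the human decision-maker a counter-perspective instead of a single authoritative rationale.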