The field of explainable AI is advancing rapidly, with increasing focus on methods that provide transparent and reliable explanations for machine learning models. Recent research has highlighted the importance of evaluating explanations along multiple properties, including stability and target sensitivity. There is also growing recognition of the need to address background biases in post-hoc concept embeddings and to develop more effective metrics for explanation quality. Noteworthy papers in this area include Uncovering the Structure of Explanation Quality with Spectral Analysis, which proposes a new framework for evaluating explanation quality, and On Background Bias of Post-Hoc Concept Embeddings in Computer Vision DNNs, which investigates the prevalence of background biases in state-of-the-art post-hoc C-XAI approaches.
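To make the notion of explanation stability concrete, the sketch below estimates it as the average cosine similarity between an attribution for an input and attributions for slightly perturbed copies of that input. This is a generic perturbation-based illustration under assumed names and settings (the `stability_score` function, the `explain_fn` callable, and the Gaussian-noise setup are all hypothetical), not the evaluation framework proposed in the papers cited above.

```python
import numpy as np

def stability_score(explain_fn, x, n_samples=10, noise_std=0.01, seed=0):
    """Estimate explanation stability: mean cosine similarity between the
    attribution for x and attributions for slightly perturbed copies of x.
    Values near 1.0 indicate explanations that are stable under small noise."""
    rng = np.random.default_rng(seed)
    base = explain_fn(x).ravel()
    sims = []
    for _ in range(n_samples):
        x_noisy = x + rng.normal(0.0, noise_std, size=x.shape)
        attr = explain_fn(x_noisy).ravel()
        denom = np.linalg.norm(base) * np.linalg.norm(attr) + 1e-12
        sims.append(float(base @ attr / denom))
    return float(np.mean(sims))

# Toy usage: the gradient of a linear model is constant in x, so this
# attribution is perfectly stable by construction and scores ~1.0.
if __name__ == "__main__":
    w = np.array([0.5, -1.0, 2.0])
    explain = lambda x: w              # hypothetical gradient-based attribution
    x = np.array([1.0, 2.0, 3.0])
    print(stability_score(explain, x))
```

In practice, `explain_fn` would wrap a real attribution method (e.g., a saliency or concept-based explainer) applied to a trained model; the same perturb-and-compare pattern then quantifies how robust the resulting explanations are.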