The field of misinformation research is moving toward a more nuanced understanding of the factors that shape public support for interventions, alongside the development of more transparent and explainable detection systems. Recent work highlights the importance of perceived fairness, effectiveness, and intrusiveness in determining public support for misinformation interventions. In parallel, there is a growing trend toward multimodal and interactive detection frameworks that provide interpretable explanations for their predictions; such frameworks have the potential to increase trust in AI systems and improve their usability in real-world decision-making contexts.

Noteworthy papers in this area include "From Prediction to Explanation: Multimodal, Explainable, and Interactive Deepfake Detection Framework for Non-Expert Users," which presents a framework for deepfake detection that integrates visual, semantic, and narrative layers of explanation, and "Exploring Content and Social Connections of Fake News with Explainable Text and Graph Learning," which proposes an explainable framework combining content, social media, and graph-based features to enhance fact-checking.
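
As a rough illustration of how such a hybrid, explainable pipeline might be wired together, the sketch below fuses text-content features with simple social-graph statistics and surfaces the linear model's largest weights as a crude explanation. The toy data, feature choices, and scikit-learn/networkx stack are assumptions for illustration only, not the actual methods of the cited papers.

```python
# Minimal sketch (assumed setup): combine content (TF-IDF) and social-graph
# features for explainable fake-news classification. The corpus, sharing
# graph, and model choice are illustrative, not the papers' pipelines.
import networkx as nx
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy corpus: article text plus the users who shared each article.
articles = [
    "miracle cure discovered doctors hate this trick",
    "central bank raises interest rates by a quarter point",
    "celebrity secretly replaced by clone says insider",
    "city council approves new budget for road repairs",
]
labels = np.array([1, 0, 1, 0])  # 1 = fake, 0 = real
sharers = [["u1", "u2"], ["u3"], ["u1", "u4"], ["u3", "u5"]]

# Content features: TF-IDF over article text.
vectorizer = TfidfVectorizer()
X_text = vectorizer.fit_transform(articles)

# Social features: article-user sharing graph, summarized by simple
# structural statistics of each article node (degree, clustering).
G = nx.Graph()
for i, users in enumerate(sharers):
    for u in users:
        G.add_edge(f"article_{i}", u)
graph_feats = np.array(
    [[G.degree(f"article_{i}"), nx.clustering(G, f"article_{i}")]
     for i in range(len(articles))]
)

# Fuse the two modalities and fit an interpretable linear classifier.
X = hstack([X_text, csr_matrix(graph_feats)])
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# Explanation: report the most influential features by absolute weight.
feature_names = list(vectorizer.get_feature_names_out()) + ["degree", "clustering"]
coefs = clf.coef_.ravel()
for idx in np.argsort(np.abs(coefs))[::-1][:5]:
    print(f"{feature_names[idx]}: weight={coefs[idx]:+.3f}")
```

In a real system the linear weights would likely be replaced by richer, layered explanations (visual, semantic, narrative) as described in the papers above; this sketch only shows the basic pattern of fusing content and graph features and attributing a prediction back to them.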