Advancements in AI-Driven Mental Health and Toxicity Mitigation

The field of AI-driven mental health and toxicity mitigation is rapidly evolving, with a focus on developing solutions to the growing problems of online hate speech, mental health crisis handling, and toxicity detection. Recent studies have explored the application of large language models (LLMs) to mental health diagnosis, crisis handling, and therapy recommendation. Notably, integrating LLMs with clinical guidelines and ontologies has shown promise in improving diagnostic accuracy and therapy outcomes. Furthermore, the development of comprehensive taxonomies for toxicity detection and mitigation has laid the groundwork for more effective and proactive strategies against online harm. Overall, the field is moving toward more nuanced and context-aware approaches, with an emphasis on scalable, customizable, and data-driven solutions. Noteworthy papers include DeHate, which introduces a multimodal approach to mitigating hate speech in images, and MDD-Thinker, which presents a reasoning-enhanced LLM framework for diagnosing major depressive disorder. Between Help and Harm is also notable for its evaluation of mental health crisis handling by LLMs, highlighting the need for stronger safeguards and improved crisis detection. Additionally, PsychoBench provides a comprehensive assessment of LLMs' ability to function as counselors, with advanced models scoring well above the passing threshold.

Sources

DeHate: A Stable Diffusion-based Multimodal Approach to Mitigate Hate Speech in Images

MDD-Thinker: Towards Large Reasoning Models for Major Depressive Disorder Diagnosis

Between Help and Harm: An Evaluation of Mental Health Crisis Handling by LLMs

Toxicity in Online Platforms and AI Systems: A Survey of Needs, Challenges, Mitigations, and Future Directions

MHINDR - a DSM5 based mental health diagnosis and recommendation framework using LLM

Feasibility of Structuring Stress Documentation Using an Ontology-Guided Large Language Model

PsychoBench: Evaluating the Psychology Intelligence of Large Language Models
