Innovations in Human-Centric AI for Mental Health and Education

The field of human-centric AI is advancing rapidly, with a strong focus on innovative solutions for mental health and education. Recent research has explored AI-powered digital twins for personalized well-being, multimodal large language models for analyzing parent-child interactions, and intention-centered frameworks for enhancing emotional support in dialogue systems. These advances show promise for improving mental health outcomes, strengthening emotional support, and augmenting human capabilities in education. Notably, the development of responsible AI frameworks and human-in-the-loop methodologies has improved the trustworthiness and credibility of AI-driven results. Overall, the field is moving toward more human-centric, transparent, and explainable AI solutions that prioritize user well-being and agency.

Noteworthy papers in this area include Human-AI Alignment of Multimodal Large Language Models with Speech-Language Pathologists in Parent-Child Interactions, which demonstrates the feasibility of aligning large language models with expert judgment in analyzing parent-child interactions, and RHealthTwin: Towards Responsible and Multimodal Digital Twins for Personalized Well-being, which proposes a principled framework for building and governing AI-powered digital twins for well-being assistance and achieves state-of-the-art results on benchmark datasets.

Sources

Regenerating Daily Routines for Young Adults with Depression through User-Led Indoor Environment Modifications Using Local Natural Materials

Human-AI Alignment of Multimodal Large Language Models with Speech-Language Pathologists in Parent-Child Interactions

IntentionESC: An Intention-Centered Framework for Enhancing Emotional Support in Dialogue Systems

A Novel, Human-in-the-Loop Computational Grounded Theory Framework for Big Social Data

RHealthTwin: Towards Responsible and Multimodal Digital Twins for Personalized Well-being

CounselBench: A Large-Scale Expert Evaluation and Adversarial Benchmark of Large Language Models in Mental Health Counseling

Educators' Perceptions of Large Language Models as Tutors: Comparing Human and AI Tutors in a Blind Text-only Setting

"Is This Really a Human Peer Supporter?": Misalignments Between Peer Supporters and Experts in LLM-Supported Interactions

"I Said Things I Needed to Hear Myself": Peer Support as an Emotional, Organisational, and Sociotechnical Practice in Singapore

Do LLMs Give Psychometrically Plausible Responses in Educational Assessments?

When Large Language Models are Reliable for Judging Empathic Communication
