Multimodal Emotion Recognition Trends

The field of multimodal emotion recognition is moving towards more effective fusion strategies, the use of large-scale pre-trained models, and the incorporation of psychologically meaningful priors to guide multimodal alignment. Researchers are exploring novel ways to integrate visual, audio, and textual signals to improve recognition performance. Noteworthy papers include ECMF, a multimodal emotion recognition framework that leverages large-scale pre-trained models and achieves a substantial improvement over the official baseline, and VEGA, which introduces a Visual Emotion Guided Anchoring mechanism that constructs emotion-specific visual anchors from facial exemplars and achieves state-of-the-art performance on IEMOCAP and MELD.
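
To make the fusion idea concrete, below is a minimal, generic sketch of gated late fusion over per-modality embeddings. It is not the ECMF or VEGA architecture; all module names, dimensions, and the gating scheme are illustrative assumptions, shown only to indicate how visual, audio, and textual features might be projected into a shared space and weighted before classification.

```python
# Minimal sketch of gated late fusion for emotion recognition.
# All names and dimensions are illustrative, not taken from ECMF or VEGA.
import torch
import torch.nn as nn


class SimpleFusionClassifier(nn.Module):
    def __init__(self, vis_dim=512, aud_dim=256, txt_dim=768, hidden=256, n_emotions=7):
        super().__init__()
        # Project each modality into a shared hidden space.
        self.vis_proj = nn.Linear(vis_dim, hidden)
        self.aud_proj = nn.Linear(aud_dim, hidden)
        self.txt_proj = nn.Linear(txt_dim, hidden)
        # Learn per-modality weights so more reliable signals dominate the fusion.
        self.gate = nn.Linear(hidden * 3, 3)
        self.classifier = nn.Linear(hidden, n_emotions)

    def forward(self, vis, aud, txt):
        v = torch.tanh(self.vis_proj(vis))
        a = torch.tanh(self.aud_proj(aud))
        t = torch.tanh(self.txt_proj(txt))
        # Softmax gate produces one weight per modality for each sample.
        weights = torch.softmax(self.gate(torch.cat([v, a, t], dim=-1)), dim=-1)
        fused = weights[..., 0:1] * v + weights[..., 1:2] * a + weights[..., 2:3] * t
        return self.classifier(fused)


# Example: a batch of 4 utterances with precomputed modality embeddings.
model = SimpleFusionClassifier()
logits = model(torch.randn(4, 512), torch.randn(4, 256), torch.randn(4, 768))
print(logits.shape)  # torch.Size([4, 7])
```

In practice, the pre-trained encoders that produce these embeddings (e.g., vision, speech, and language models) are the main source of performance gains; the fusion layer above only illustrates how their outputs might be combined.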

Sources

ECMF: Enhanced Cross-Modal Fusion for Multimodal Emotion Recognition in MER-SEMI Challenge

More Is Better: A MoE-Based Emotion Recognition Framework with Human Preference Alignment

Grounding Emotion Recognition with Visual Prototypes: VEGA -- Revisiting CLIP in MERC

Hardness-Aware Dynamic Curriculum Learning for Robust Multimodal Emotion Recognition with Missing Modalities

A Trustworthy Method for Multimodal Emotion Recognition

Towards Multimodal Sentiment Analysis via Contrastive Cross-modal Retrieval Augmentation and Hierarchical Prompts

MoLAN: A Unified Modality-Aware Noise Dynamic Editing Framework for Multimodal Sentiment Analysis

Understanding Textual Emotion Through Emoji Prediction

Conditional Information Bottleneck for Multimodal Fusion: Overcoming Shortcut Learning in Sarcasm Detection
