Advances in Multimodal Learning and Imbalanced Regression

The field of multimodal learning is moving toward addressing the problem of under-optimized modalities and improving performance by exploiting imbalanced learning across modalities. Researchers are proposing novel frameworks and strategies to balance modality dependence and reduce gradient interference. There is also growing interest in imbalanced regression, with studies focusing on resampling strategies and pipeline recommendation.

Noteworthy papers include: Improving Multimodal Learning via Imbalanced Learning, which introduces the Asymmetric Representation Learning strategy to assist multimodal learning via imbalanced optimization. Boosting Multimodal Learning via Disentangled Gradient Learning proposes a disentangled gradient learning framework that decouples the optimization of the modality encoders from the modality fusion module; the first sketch below illustrates the gradient-decoupling idea. Robust-Multi-Task Gradient Boosting presents a boosting framework that explicitly models and adapts to task heterogeneity during training. Imbalanced Regression Pipeline Recommendation presents a meta-learning framework that trains meta-classifiers to recommend, for each task and in a zero-shot fashion, the best pipeline composed of a resampling strategy and a learning model; the second sketch below illustrates this meta-recommendation setup.
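To make the gradient-decoupling idea concrete, here is a minimal PyTorch sketch. It is not the paper's actual architecture: the module names, feature sizes, concatenation-based fusion, and unimodal heads are all illustrative assumptions. The core point it demonstrates is that detaching encoder outputs before fusion prevents the fusion loss from interfering with encoder optimization, so each encoder learns only from its own unimodal loss.

```python
# A minimal sketch of decoupled encoder/fusion optimization (assumed
# architecture, not the paper's actual model).
import torch
import torch.nn as nn

class DisentangledGradientModel(nn.Module):
    def __init__(self, dim_a=128, dim_v=128, hidden=64, n_classes=10):
        super().__init__()
        self.enc_audio = nn.Sequential(nn.Linear(dim_a, hidden), nn.ReLU())
        self.enc_video = nn.Sequential(nn.Linear(dim_v, hidden), nn.ReLU())
        # Per-modality heads supply the unimodal supervision signal.
        self.head_audio = nn.Linear(hidden, n_classes)
        self.head_video = nn.Linear(hidden, n_classes)
        # Fusion module consumes concatenated modality features.
        self.fusion = nn.Linear(2 * hidden, n_classes)

    def forward(self, x_audio, x_video):
        za, zv = self.enc_audio(x_audio), self.enc_video(x_video)
        logits_a = self.head_audio(za)
        logits_v = self.head_video(zv)
        # detach() blocks the fusion loss from reaching the encoders,
        # which is the core of the decoupled optimization.
        logits_f = self.fusion(torch.cat([za.detach(), zv.detach()], dim=-1))
        return logits_a, logits_v, logits_f

model = DisentangledGradientModel()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

x_a, x_v = torch.randn(8, 128), torch.randn(8, 128)
y = torch.randint(0, 10, (8,))

logits_a, logits_v, logits_f = model(x_a, x_v)
# Encoders learn from the unimodal losses only; the fusion module
# learns from the fused loss computed on detached features.
loss = criterion(logits_a, y) + criterion(logits_v, y) + criterion(logits_f, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Because the fused logits are computed on detached features, the fusion gradients update only the fusion weights, while a dominant modality can no longer starve a weaker one of gradient signal through the shared fusion path.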
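The second sketch illustrates the zero-shot pipeline-recommendation setup with scikit-learn. Everything here is an assumption for illustration: the meta-features, the candidate pipeline labels, and the randomly generated meta-dataset do not come from the paper. The point is the mechanism: a meta-classifier learns a mapping from dataset meta-features to the best-performing (resampling strategy, learner) pair on past tasks, so a new task gets a recommendation without running any candidate pipeline.

```python
# A minimal sketch of meta-classifier-based pipeline recommendation
# (assumed meta-features and pipeline labels, random meta-dataset).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def meta_features(X, y):
    """Cheap dataset descriptors; real systems use much richer sets."""
    return np.array([
        X.shape[0],                                               # samples
        X.shape[1],                                               # features
        np.abs(np.mean(y) - np.median(y)) / (np.std(y) + 1e-9),   # skew proxy
        np.std(y),                                                # target spread
    ])

PIPELINES = ["none+rf", "smogn+rf", "undersample+xgb", "oversample+svr"]

# Pretend meta-dataset: meta-features of 200 past regression tasks and
# the index of the pipeline that won on each (randomly generated here).
rng = np.random.default_rng(0)
past_tasks = [(rng.normal(size=(50, 5)), rng.normal(size=50)) for _ in range(200)]
M = np.stack([meta_features(X, y) for X, y in past_tasks])
best = rng.integers(0, len(PIPELINES), size=len(past_tasks))

meta_clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(M, best)

# Zero-shot recommendation for an unseen task: no candidate pipeline is
# run; only meta-features are computed and the meta-classifier queried.
X_new, y_new = rng.normal(size=(80, 5)), rng.normal(size=80)
rec = meta_clf.predict(meta_features(X_new, y_new).reshape(1, -1))[0]
print("recommended pipeline:", PIPELINES[rec])
```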

Sources

Improving Multimodal Learning via Imbalanced Learning

Boosting Multimodal Learning via Disentangled Gradient Learning

Robust-Multi-Task Gradient Boosting

Imbalanced Regression Pipeline Recommendation

Resampling strategies for imbalanced regression: a survey and empirical analysis
