Multi-View Learning and Contrastive Representation

Research in multi-view learning and contrastive representation is moving toward more effective and robust methods for integrating diverse representations of the same instances. Recent advances address challenges such as untrustworthy fusion, information distortion, and limited collaboration across views. A key direction is the design of novel loss functions and optimization schemes that improve the quality of learned representations and boost performance on downstream tasks.

Noteworthy papers in this area include THCRL, which proposes a trusted hierarchical contrastive representation learning approach for multi-view clustering, and Context-Enriched Contrastive Loss, which introduces a loss function that improves learning effectiveness and mitigates information distortion. In addition, Task-Aligned Context Selection is a framework that learns to select paired examples that improve task performance, and Adaptive Weighted LSSVM is a method that promotes complementary learning across views.
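To make the core idea concrete, here is a minimal sketch of a standard two-view contrastive objective (a symmetric InfoNCE-style loss), in which embeddings of the same instance under two views form positive pairs and all other pairs are negatives. This is a generic illustration of the contrastive-loss family these papers build on, not the specific loss of any paper above; the function name and temperature value are assumptions for the example.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.5):
    """Symmetric InfoNCE-style contrastive loss between two views.

    z1, z2: (n, d) embeddings of the same n instances under two views;
    row i of z1 and row i of z2 are a positive pair, all others negatives.
    (Illustrative sketch, not the loss of any specific paper above.)
    """
    # L2-normalize so the dot product is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # (n, n) cross-view similarity matrix
    # Cross-entropy with the diagonal as the positive class, in both directions
    log_p12 = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_p21 = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    return -(np.diag(log_p12).mean() + np.diag(log_p21).mean()) / 2

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
aligned = info_nce(x, x + 0.01 * rng.normal(size=(8, 16)))  # views agree
shuffled = info_nce(x, rng.permutation(x))                  # views misaligned
print(aligned, shuffled)  # aligned views should give the lower loss
```

Minimizing this loss pulls the two views of each instance together in embedding space while pushing apart different instances, which is the basic mechanism the surveyed methods refine with trust estimation, context enrichment, and adaptive view weighting.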

Sources

THCRL: Trusted Hierarchical Contrastive Representation Learning for Multi-View Clustering

Learning What Helps: Task-Aligned Context Selection for Vision Tasks

Context-Enriched Contrastive Loss: Enhancing Presentation of Inherent Sample Connections in Contrastive Learning Framework

Adaptive Weighted LSSVM for Multi-View Classification
