Contrastive Learning and Representation Advances

The field of representation learning is moving toward exploiting inherent structure and mechanisms in data to improve model performance and interpretability. Researchers are exploring novel contrastive learning frameworks that leverage multiple views and semantic diversity to learn effective embeddings, and there is growing interest in integrating human perceptual priors and saliency into training to align models with human expertise. Frameworks that disentangle independent mechanisms and capture their combined effects are also gaining traction, as are structured contrastive learning and equivariant canonicalization, which aim to make latent representations more robust and interpretable. Notable papers include:

- Patent Representation Learning via Self-supervision, which proposes a simple yet effective contrastive learning framework for learning patent embeddings.
- Understanding InfoNCE, which introduces a novel loss function enabling flexible control over feature similarity alignment.
- DIVIDE, which disentangles independent mechanisms in scientific datasets using deep encoders and Gaussian processes.
- SAGE, which integrates human saliency into network training via contrastive embeddings.
- Structured Contrastive Learning, which partitions the latent space into semantic groups to yield interpretable representations.
- Eq.Bot, which proposes a universal canonicalization framework for robotic manipulation learning based on group equivariant theory.
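Since several of these methods build on InfoNCE-style objectives, a minimal sketch of the standard InfoNCE loss may help ground the discussion. This is the generic two-view formulation, not the specific variant proposed in Understanding InfoNCE or any other paper above, and the function and argument names are illustrative.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Standard InfoNCE over a batch of paired views.

    z_a, z_b: (N, D) embeddings of two augmented views of the same N items.
    Row i of z_a treats row i of z_b as its positive and the remaining
    N - 1 rows as in-batch negatives.
    """
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature              # (N, N) cosine similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)           # diagonal entries are positives
```

Work in this area typically varies the similarity term, the temperature, or the choice of positives and negatives; partitioning the embedding into semantic groups and applying such an objective per group is one plausible reading of the structured approaches surveyed here.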

Sources

Patent Representation Learning via Self-supervision

Understanding InfoNCE: Transition Probability Matrix Induced Feature Clustering

DIVIDE: A Framework for Learning from Independent Multi-Mechanism Data Using Deep Encoders and Gaussian Processes

SAGE: Saliency-Guided Contrastive Embeddings

Structured Contrastive Learning for Interpretable Latent Representations

Eq.Bot: Enhance Robotic Manipulation Learning via Group Equivariant Canonicalization

Change-of-Basis Pruning via Rotational Invariance
