Advances in Unsupervised Learning and Representation

The field of unsupervised learning is moving toward more robust and scalable methods, with a focus on variational inference and self-supervised learning. Variational autoencoders (VAEs) and related techniques are seeing increasing use in clustering, representation learning, and symmetry discovery. Notably, researchers are improving the robustness of these methods to noise and missing data, and developing techniques for learning transferable representations without generative reconstruction. There is also growing interest in exploiting structural assumptions, such as axis-aligned subspaces, to make optimization tractable in high-dimensional problems; a toy sketch of this idea follows below. Together, these directions point toward more robust and efficient unsupervised representation learning.
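
The subspace idea can be stated concretely. The actual paper presumably embeds group testing inside a full Bayesian optimization loop, which is not reproduced here; the following minimal sketch, with an invented objective `f` and hypothetical helper names, shows only the core trick of using group testing to find the few axis-aligned coordinates a high-dimensional black-box function actually depends on:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy black-box objective on R^100 that secretly depends on only
# a few axis-aligned coordinates (here dimensions 3, 17, and 42).
ACTIVE = (3, 17, 42)

def f(x):
    return sum(np.sin(x[i]) for i in ACTIVE)

def group_responds(dims, x0):
    """Jointly perturb the coordinates in `dims`; if f changes, the group
    contains at least one active coordinate (one random probe per group;
    real group testing would repeat probes to control false negatives)."""
    x = x0.copy()
    x[dims] += rng.normal(size=len(dims))
    return abs(f(x) - f(x0)) > 1e-9

def find_active(dims, x0):
    """Group testing by bisection: discard unresponsive groups wholesale,
    split responsive ones until single coordinates remain."""
    if not group_responds(dims, x0):
        return []
    if len(dims) == 1:
        return [int(dims[0])]
    mid = len(dims) // 2
    return find_active(dims[:mid], x0) + find_active(dims[mid:], x0)

d = 100
x0 = np.zeros(d)
print(find_active(np.arange(d), x0))  # [3, 17, 42] with high probability
```

Once the relevant axes are found, the expensive optimization can be confined to that low-dimensional axis-aligned subspace instead of the full 100-dimensional space.

Noteworthy papers include: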

  • Scalable Robust Bayesian Co-Clustering with Compositional ELBOs, which presents a fully variational co-clustering framework that directly learns row and column clusters in the latent space.
  • Variational Self-Supervised Learning, which introduces a framework combining variational inference with self-supervised learning to enable efficient, decoder-free representation learning (a generic ELBO sketch follows this list for reference).
  • Learning symmetries in datasets, which investigates how symmetries present in datasets affect the structure of the latent space learned by VAEs.
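
For orientation, the objective these variational methods start from is the standard evidence lower bound (ELBO): a reconstruction term plus a KL regularizer on the latent posterior. The following is a minimal, generic PyTorch sketch, not the method of any paper above; the architecture sizes and Bernoulli likelihood are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    """Minimal VAE: Gaussian encoder q(z|x), Bernoulli decoder p(x|z)."""
    def __init__(self, x_dim=784, z_dim=16, h=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h), nn.ReLU())
        self.mu, self.logvar = nn.Linear(h, z_dim), nn.Linear(h, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h), nn.ReLU(),
                                 nn.Linear(h, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps keeps sampling differentiable.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def neg_elbo(logits, x, mu, logvar):
    """Negative ELBO = reconstruction NLL + KL(q(z|x) || N(0, I))."""
    recon = F.binary_cross_entropy_with_logits(logits, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return (recon + kl) / x.size(0)

x = torch.rand(32, 784)              # stand-in batch of inputs in [0, 1]
model = TinyVAE()
logits, mu, logvar = model(x)
loss = neg_elbo(logits, x, mu, logvar)
loss.backward()
```

Decoder-free variants in the spirit of Variational Self-Supervised Learning roughly drop the reconstruction term (and hence the decoder) and instead pair the KL regularizer with a self-supervised consistency objective, which is what enables representation learning without generative reconstruction.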

Sources

  • Scalable Robust Bayesian Co-Clustering with Compositional ELBOs
  • Variational Self-Supervised Learning
  • Dual Consistent Constraint via Disentangled Consistency and Complementarity for Multi-view Clustering
  • Learning symmetries in datasets
  • Leveraging Axis-Aligned Subspaces for High-Dimensional Bayesian Optimization with Group Testing
