Advancements in Multi-View Learning and Edge Intelligence

The field of machine learning is witnessing significant developments in multi-view learning and edge intelligence. Researchers are exploring approaches that learn from multiple views of the same data, improving performance in areas such as contrastive representation learning and edge detection. One notable direction is multi-view mid fusion, which has shown promise in high-dimensional low-sample-size (HDLSS) settings. Another is the development of principled frameworks for multi-view contrastive learning, designed to leverage multiple views effectively and to mitigate the limitations of existing methods. There is also growing interest in deploying machine learning models on edge devices, which requires adapting to resource-constrained settings and balancing model performance against energy efficiency. Noteworthy papers in this area include:

  • A Principled Framework for Multi-View Contrastive Learning, which introduces novel loss functions for contrastive learning with more than two views and demonstrates their effectiveness across a range of tasks.
  • Edge-Boundary-Texture Loss, which proposes a loss function for edge detection that explicitly divides pixels into three categories and assigns each a distinct supervisory weight.
  • Self-Supervised Learning at the Edge, which examines the feasibility and efficiency of self-supervised learning on edge devices and evaluates how robust the learned representations are under resource constraints.
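To make the multi-view contrastive idea concrete, the sketch below implements a generic multi-positive InfoNCE objective over V views. This is an illustrative baseline, not the specific loss proposed in the cited paper: the temperature value, the use of cosine similarity, and the treatment of all same-sample cross-view pairs as positives are assumptions.

```python
import numpy as np

def multiview_info_nce(views, temperature=0.1):
    """Illustrative multi-view contrastive loss (multi-positive InfoNCE).

    views: list of V arrays, each (N, D), holding embeddings of the same
    N samples under V different views. Embeddings of the same sample
    across views are treated as positives; all other pairs as negatives.
    """
    n, v = views[0].shape[0], len(views)
    z = np.concatenate(views, axis=0)                     # (V*N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)      # cosine similarity
    sim = z @ z.T / temperature                           # (VN, VN)
    np.fill_diagonal(sim, -np.inf)                        # drop self-pairs

    # Positive mask: same sample index, different view.
    idx = np.tile(np.arange(n), v)
    pos = idx[None, :] == idx[:, None]
    np.fill_diagonal(pos, False)

    # Row-wise log-softmax (numerically stable).
    m = sim.max(axis=1, keepdims=True)
    log_prob = sim - (m + np.log(np.exp(sim - m).sum(axis=1, keepdims=True)))
    return -log_prob[pos].mean()
```

With V = 2 this reduces to the familiar pairwise InfoNCE setting; the multi-view case simply averages the log-likelihood over all V - 1 positives per anchor.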

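The tri-class weighting idea behind the Edge-Boundary-Texture loss can be sketched as a weighted binary cross-entropy in which each pixel's loss is scaled by its class weight. The class definitions (edge / boundary / texture) and the weight values below are illustrative assumptions; the paper's actual partition rule and weights are not reproduced here.

```python
import numpy as np

def tri_class_weighted_bce(pred, target, cls_map, weights=(1.0, 0.5, 0.25)):
    """Illustrative tri-class weighted binary cross-entropy.

    pred:    (H, W) predicted edge probabilities in (0, 1)
    target:  (H, W) binary ground-truth edge map
    cls_map: (H, W) integer map assigning each pixel to one of three
             classes: 0 = edge, 1 = boundary (near-edge), 2 = texture
    weights: per-class supervisory weights (illustrative values)
    """
    eps = 1e-7
    p = np.clip(pred, eps, 1 - eps)
    bce = -(target * np.log(p) + (1 - target) * np.log(1 - p))
    w = np.asarray(weights)[cls_map]   # look up each pixel's class weight
    return (w * bce).sum() / w.sum()
```

Down-weighting texture pixels this way keeps the abundant easy negatives from dominating the gradient, which is the usual motivation for class-dependent weights in edge detection.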
Sources

Multi-view mid fusion: a universal approach for learning in an HDLSS setting

Divergence-Based Similarity Function for Multi-View Contrastive Learning

Edge-Boundary-Texture Loss: A Tri-Class Generalization of Weighted Binary Cross-Entropy for Enhanced Edge Detection

A Principled Framework for Multi-View Contrastive Learning

On-Device Training of PV Power Forecasting Models in a Smart Meter for Grid Edge Intelligence

Self-Supervised Learning at the Edge: The Cost of Labeling

Learning from positive and unlabeled examples - Finite size sample bounds
