Advances in Feature Selection and Unsupervised Learning

Research on feature selection and unsupervised learning is converging on methods that stay accurate and robust when labeled data is scarce. Recent work includes mutual information neural estimation for feature selection, hybrid frameworks that combine untrained and pre-trained networks, and unsupervised active learning methods that reduce the annotation burden while improving model performance. These advances have direct implications for applications such as image denoising, semantic segmentation, and object discovery.
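To make the active learning idea concrete, here is a minimal, hedged sketch of the classic least-confidence acquisition rule: label the unlabeled samples whose top predicted class probability is lowest. This is a generic illustration, not the calibrated acquisition function proposed in the paper discussed below (which additionally corrects miscalibrated probabilities); the function name and budget parameter are illustrative.

```python
import numpy as np

def least_confidence_query(probs, budget):
    """Select the `budget` unlabeled samples the model is least sure about.

    probs: (n_samples, n_classes) array of predicted class probabilities.
    Returns indices of the samples to send for annotation.
    """
    confidence = probs.max(axis=1)          # top-class probability per sample
    return np.argsort(confidence)[:budget]  # least confident first
```

Querying only these low-confidence samples is what lets active learning cut annotation cost relative to labeling data uniformly at random.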

Some noteworthy papers include:

MINERVA: introduces a novel approach to supervised feature selection based on neural estimation of mutual information.

Net2Net: presents a hybrid framework for robust real-world denoising that combines untrained and pre-trained networks.

Training-Free Out-Of-Distribution Segmentation With Foundation Models: proposes a simple, effective method for detecting out-of-distribution regions in semantic segmentation.

Calibrated Uncertainty Sampling for Active Learning: introduces a new acquisition function that improves the calibration of uncertainty estimates in active learning.

Semantic-Inductive Attribute Selection for Zero-Shot Learning: presents a partitioning scheme that refines semantic spaces to improve zero-shot learning performance.

Towards Multimodal Active Learning: develops a framework for multimodal active learning with unaligned data.

Unsupervised Transformer Pre-Training for Images: examines the core ideas behind self-supervised pre-training approaches and their applications.

Unsupervised Active Learning via Natural Feature Progressive Framework: proposes a novel framework for active learning without labeled supervision.

DADO: introduces a depth-attention framework for object discovery.

Resolution scaling governs DINOv3 transfer performance in chest radiograph classification: investigates how input resolution affects the transfer performance of self-supervised models in chest radiography.
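To illustrate the feature-selection theme, the following is a minimal sketch of ranking features by an estimate of their mutual information with the labels. It uses a simple histogram estimator rather than the neural estimator of MINERVA, which the paper above introduces; all function names here are illustrative assumptions, not the paper's API.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram-based MI estimate between a 1-D feature x and labels y."""
    edges = np.histogram_bin_edges(x, bins=bins)[1:-1]  # inner bin edges
    x_binned = np.digitize(x, edges)                    # values in 0..bins-1
    classes = {c: i for i, c in enumerate(np.unique(y))}
    joint = np.zeros((bins, len(classes)))
    for xi, yi in zip(x_binned, y):
        joint[xi, classes[yi]] += 1
    joint /= joint.sum()                                # joint distribution
    px = joint.sum(axis=1, keepdims=True)               # feature marginal
    py = joint.sum(axis=0, keepdims=True)               # label marginal
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

def rank_features(X, y, k=2):
    """Return indices of the k features with highest estimated MI with y."""
    scores = [mutual_information(X[:, j], y) for j in range(X.shape[1])]
    return np.argsort(scores)[::-1][:k]
```

Neural estimators replace the histogram with a trained critic network, which scales this idea to high-dimensional, continuous features where binning breaks down.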

Sources

MINERVA: Mutual Information Neural Estimation for Supervised Feature Selection

Net2Net: When Un-trained Meets Pre-trained Networks for Robust Real-World Denoising

Training-Free Out-Of-Distribution Segmentation With Foundation Models

Calibrated Uncertainty Sampling for Active Learning

Semantic-Inductive Attribute Selection for Zero-Shot Learning

Towards Multimodal Active Learning: Efficient Learning with Limited Paired Data

Pilot selection in the era of Virtual reality: algorithms for accurate and interpretable machine learning models

Unsupervised Transformer Pre-Training for Images: Self-Distillation, Mean Teachers, and Random Crops

Unsupervised Active Learning via Natural Feature Progressive Framework

DADO: A Depth-Attention framework for Object Discovery

Resolution scaling governs DINOv3 transfer performance in chest radiograph classification
