Advances in Domain Adaptation and Few-Shot Learning

The field of machine learning is seeing significant advances in domain adaptation and few-shot learning, as researchers explore methods that improve model performance in target domains without requiring large amounts of labeled data. One notable direction is self-improvement for audio large language models, which uses reinforcement learning optimization to boost performance in specific target domains. Another is partial domain adaptation, where importance sampling-based shift correction methods are proposed to characterize the latent structure of the data and strengthen generalization. Few-shot learning is advancing as well, with approaches such as dominant property mining and multiple semantic-guided context optimization improving generalization to novel classes.

Noteworthy papers in this area include:

Self-Improvement for Audio Large Language Model using Unlabeled Speech, which proposes SI-SDA, a self-improvement method that enhances audio LLM performance.

Beyond Class Tokens: LLM-guided Dominant Property Mining for Few-shot Classification, which mines dominant properties via contrastive learning to advance few-shot classification.

MSGCoOp: Multiple Semantic-Guided Context Optimization for Few-Shot Learning, which learns an ensemble of parallel context vectors to capture diverse semantic aspects.

Rethinking Few Shot CLIP Benchmarks: A Critical Analysis in the Inductive Setting, which proposes a pipeline that uses an unlearning technique to obtain true inductive baselines.

From Entanglement to Alignment: Representation Space Decomposition for Unsupervised Time Series Domain Adaptation, which introduces a UDA framework with theoretical explainability, approaching the task from the perspective of representation space decomposition.
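Several of the listed papers (e.g. Prototype-Guided Pseudo-Labeling with Neighborhood-Aware Consistency) build on a common primitive in unsupervised adaptation: compute per-class prototypes from labeled source features and pseudo-label unlabeled target samples by nearest prototype. The sketch below is a generic illustration of that primitive, not the specific method of any paper above; the function names and the cosine-similarity choice are illustrative assumptions.

```python
import numpy as np

def class_prototypes(features, labels, num_classes):
    """Mean feature vector per class, computed from labeled source data."""
    return np.stack([features[labels == c].mean(axis=0)
                     for c in range(num_classes)])

def pseudo_label(target_features, prototypes):
    """Assign each unlabeled target sample the class of its nearest
    prototype under cosine similarity, returning labels and confidences."""
    # L2-normalize so the dot product equals cosine similarity
    t = target_features / np.linalg.norm(target_features, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sims = t @ p.T                   # (n_target, num_classes)
    labels = sims.argmax(axis=1)     # hard pseudo-labels
    confidence = sims.max(axis=1)    # can be thresholded to drop noisy labels
    return labels, confidence
```

In practice the confidence scores are typically thresholded, and methods like the ones above add consistency constraints (e.g. agreement with a sample's neighborhood) on top of this basic assignment.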

Sources

Self-Improvement for Audio Large Language Model using Unlabeled Speech

Partial Domain Adaptation via Importance Sampling-based Shift Correction

Beyond Class Tokens: LLM-guided Dominant Property Mining for Few-shot Classification

Rethinking Few Shot CLIP Benchmarks: A Critical Analysis in the Inductive Setting

From Entanglement to Alignment: Representation Space Decomposition for Unsupervised Time Series Domain Adaptation

MSGCoOp: Multiple Semantic-Guided Context Optimization for Few-Shot Learning

Prototype-Guided Pseudo-Labeling with Neighborhood-Aware Consistency for Unsupervised Adaptation

Transductive Model Selection under Prior Probability Shift

Ambiguity-Guided Learnable Distribution Calibration for Semi-Supervised Few-Shot Class-Incremental Learning

Multi-Prompt Progressive Alignment for Multi-Source Unsupervised Domain Adaptation
