Advances in In-Context Learning

In-context learning research is advancing quickly, with an emphasis on improving the task performance and generalization of large language models. Recent studies explore task vectors, demonstration selection, and explanation-based prompting to make in-context learning more robust and adaptable, reporting gains in task performance, better out-of-distribution generalization, and reduced sensitivity to demonstration order and context length constraints.

Noteworthy papers in this area include One Task Vector is not Enough, which introduces a new large-scale dataset and shows that task knowledge is represented in a distributed way across layers, with intermediate layers playing a key role; Exploring Explanations Improves the Robustness of In-Context Learning, which extends X-ICL by systematically exploring explanations for all possible labels; and Adaptive Task Vectors for Large Language Models, which proposes a simple, effective framework that dynamically generates task vectors conditioned on each input query, demonstrating strong performance and generalization.
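To make the task-vector idea concrete, below is a minimal, hypothetical sketch in PyTorch, with a toy residual network standing in for a transformer: a "task vector" is extracted as the mean hidden state at one intermediate layer while processing demonstrations, then added back at the same layer when processing a bare query. The model, layer index, and scaling factor alpha are illustrative assumptions, not the setup of any paper listed below.

```python
import torch
import torch.nn as nn


class Block(nn.Module):
    """One residual MLP block, standing in for a transformer layer."""

    def __init__(self, dim: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.mlp(x)


class ToyEncoder(nn.Module):
    """A small stack of residual blocks; a stand-in for an LLM's layer stack."""

    def __init__(self, dim: int = 32, n_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList([Block(dim) for _ in range(n_layers)])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for layer in self.layers:
            x = layer(x)
        return x


def extract_task_vector(model: ToyEncoder, demos: torch.Tensor, layer_idx: int) -> torch.Tensor:
    """Mean hidden state over demonstrations at one intermediate layer."""
    captured = {}

    def hook(_module, _inputs, output):
        captured["h"] = output.detach()

    handle = model.layers[layer_idx].register_forward_hook(hook)
    with torch.no_grad():
        model(demos)
    handle.remove()
    return captured["h"].mean(dim=0)  # average over demonstration examples


def run_with_task_vector(model: ToyEncoder, query: torch.Tensor, task_vec: torch.Tensor,
                         layer_idx: int, alpha: float = 1.0) -> torch.Tensor:
    """Add the task vector to the query's hidden state at the same layer."""

    def hook(_module, _inputs, output):
        return output + alpha * task_vec  # returned value replaces the layer output

    handle = model.layers[layer_idx].register_forward_hook(hook)
    with torch.no_grad():
        out = model(query)
    handle.remove()
    return out


if __name__ == "__main__":
    torch.manual_seed(0)
    model = ToyEncoder()
    demos = torch.randn(8, 32)   # hidden representations of 8 demonstration examples
    query = torch.randn(1, 32)   # a single query presented without demonstrations
    tv = extract_task_vector(model, demos, layer_idx=2)
    steered = run_with_task_vector(model, query, tv, layer_idx=2)
    print(steered.shape)  # torch.Size([1, 32])
```

This sketch uses a fixed mean over demonstrations; approaches such as Adaptive Task Vectors instead generate the vector conditioned on each input query, though the idea of injecting it at an intermediate layer is analogous.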

Sources

One Task Vector is not Enough: A Large-Scale Study for In-Context Learning

CrossICL: Cross-Task In-Context Learning via Unsupervised Demonstration Transfer

Exploring Explanations Improves the Robustness of In-Context Learning

Non-Asymptotic Length Generalization

Adaptive Task Vectors for Large Language Models

Delta-KNN: Improving Demonstration Selection in In-Context Learning for Alzheimer's Disease Detection

ConText: Driving In-context Learning for Text Removal and Segmentation

Selecting Demonstrations for Many-Shot In-Context Learning via Gradient Matching

Transformers Meet In-Context Learning: A Universal Approximation Theory
