The field of natural language processing is seeing significant advances in language model training and in-context learning. Researchers are improving the efficiency and effectiveness of these models through new loss functions, prompt tuning methods, and demonstration selection strategies. One notable direction is ordinal Word-in-Context classification, which has prompted unified frameworks that treat binary and ordinal tasks within a single formulation. There is also growing interest in task-agnostic continual learning, which enables models to adapt to new tasks without requiring task-specific prompts. Analyzing training dynamics through subsets of interest (SOI) has likewise shown promise for improving model performance. Researchers are further investigating how to optimize compute for many-shot in-context learning, including demonstration selection and caching mechanisms.

Noteworthy papers include XL-DURel, which proposes a finetuned Sentence Transformer model for ordinal Word-in-Context classification, and GRID, which introduces a unified framework for task-agnostic continual prompt tuning. TOC-UCO provides a comprehensive repository of tabular ordinal classification datasets, while TDR proposes a task-decoupled retrieval framework with fine-grained LLM feedback for in-context learning.
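
To make the idea of demonstration selection concrete, the sketch below retrieves the k pool examples most similar to a query and assembles them into a few-shot prompt. This is a minimal, generic baseline rather than the method of TDR or any paper named above; TF-IDF stands in for a stronger learned retriever, and the `select_demonstrations` helper, the demonstration pool, and the prompt format are illustrative assumptions.

```python
# Minimal sketch: similarity-based demonstration selection for in-context
# learning. Generic baseline only; not the retrieval method of any specific
# paper discussed above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def select_demonstrations(query, pool, k=4):
    """Return the k pool examples whose inputs are most similar to the query."""
    texts = [query] + [ex["input"] for ex in pool]
    tfidf = TfidfVectorizer().fit_transform(texts)
    sims = cosine_similarity(tfidf[0], tfidf[1:])[0]  # query vs. each demo
    top = sims.argsort()[::-1][:k]
    return [pool[i] for i in top]

# Hypothetical demonstration pool; in practice this would be a labeled dataset.
pool = [
    {"input": "The movie was a delight.", "label": "positive"},
    {"input": "Service was slow and rude.", "label": "negative"},
    {"input": "An instant classic.", "label": "positive"},
    {"input": "I want my money back.", "label": "negative"},
]

query = "A thoroughly enjoyable film."
demos = select_demonstrations(query, pool, k=2)
prompt = "\n".join(f"Input: {d['input']}\nLabel: {d['label']}" for d in demos)
prompt += f"\nInput: {query}\nLabel:"
print(prompt)
```

In a many-shot setting, the selected demonstrations (or their encoded prefix) could additionally be cached and reused across queries to reduce compute, which is the kind of trade-off the caching work above studies.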