Research on long-context language models and text embeddings is advancing rapidly, with recent work targeting the efficiency, accuracy, and interpretability of these models. Active directions include attributing individual documents' contributions to model outputs, improving semantic textual similarity, and generating high-quality text embeddings from large language models. There is also growing emphasis on evaluating and training contextual document embeddings, diagnosing multi-hop reasoning failures, and building controllable examination frameworks for long-context language models. These advances feed directly into applications such as text summarization, question answering, and machine translation.

Noteworthy papers in this area include Document Valuation in LLM Summaries: A Cluster Shapley Approach, which proposes an efficient algorithm for valuing the individual documents that feed an LLM-generated summary; GEM: Empowering LLM for both Embedding Generation and Language Understanding, which enables large decoder-only language models to produce high-quality text embeddings while preserving their original text generation and reasoning capabilities; and Qwen3 Embedding: Advancing Text Embedding and Reranking Through Foundation Models, which introduces a series of models achieving state-of-the-art results on text embedding and reranking tasks.
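To make the document-valuation idea concrete, the sketch below estimates per-document Shapley values by Monte Carlo sampling over permutations. This is a minimal illustration of the generic Shapley computation, not the Cluster Shapley paper's specific optimization (which groups similar documents into clusters to cut the cost); `summary_utility` is a hypothetical stand-in for whatever metric scores how well a document subset supports the final summary.

```python
"""Monte Carlo Shapley valuation of source documents for a summary.

Minimal sketch of the generic Shapley idea behind document valuation.
`summary_utility` is a hypothetical toy metric, not the paper's.
"""
import random


def summary_utility(doc_subset: set[int], relevance: dict[int, float]) -> float:
    # Toy utility: relevance mass with mild diminishing returns in subset size.
    total = sum(relevance[d] for d in doc_subset)
    return total / (1.0 + 0.1 * len(doc_subset))


def monte_carlo_shapley(doc_ids, relevance, n_permutations=2000, seed=0):
    rng = random.Random(seed)
    values = {d: 0.0 for d in doc_ids}
    for _ in range(n_permutations):
        order = list(doc_ids)
        rng.shuffle(order)
        coalition: set[int] = set()
        prev = summary_utility(coalition, relevance)
        for d in order:
            coalition.add(d)
            cur = summary_utility(coalition, relevance)
            values[d] += cur - prev  # marginal contribution of doc d
            prev = cur
    return {d: v / n_permutations for d, v in values.items()}


if __name__ == "__main__":
    docs = [0, 1, 2, 3]
    relevance = {0: 0.9, 1: 0.4, 2: 0.4, 3: 0.1}
    print(monte_carlo_shapley(docs, relevance))
```

Exact Shapley values require all 2^n subsets, so sampling permutations (or, as in the paper, exploiting cluster structure) is what makes valuation tractable for realistic document counts.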
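GEM's training objective is its own contribution; the mechanical step such approaches build on, extracting a sentence embedding from a decoder-only model, is commonly done via last-token pooling, sketched below under that assumption. The `gpt2` checkpoint is used here only as a small stand-in model; this is a generic baseline recipe, not GEM's actual method.

```python
"""Last-token pooling: a common way to get text embeddings from a
decoder-only LM. Generic baseline sketch, not GEM's training recipe."""
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "gpt2"  # stand-in; any decoder-only model works the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModel.from_pretrained(model_name).eval()

texts = ["Long-context language models", "Text embedding quality"]
batch = tokenizer(texts, padding=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # (batch, seq_len, dim)

# Index of the last real (non-padding) token in each sequence.
last_idx = batch["attention_mask"].sum(dim=1) - 1
embeddings = hidden[torch.arange(hidden.size(0)), last_idx]  # (batch, dim)
embeddings = torch.nn.functional.normalize(embeddings, dim=-1)
print(embeddings.shape)
```

The last token is a natural pooling point for causal models because it is the only position whose hidden state has attended to the entire input.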