The field of natural language processing is moving toward more context-aware language models, with researchers exploring methods to improve the faithfulness of large language models in context-dependent scenarios. One notable direction is the investigation of expert specialization in mixture-of-experts (MoE) architectures, which has led to targeted optimization approaches for stronger context grounding. Another area of focus is the construction of more diverse and comprehensive news retrieval systems, which can give users a broader understanding of real-world events. There is also growing interest in improving the accuracy and diversity of multi-hop question generation, as well as in more effective exemplar selection strategies for in-context learning.

Noteworthy papers in this area include:

- Understanding and Leveraging the Expert Specialization of Context Faithfulness in Mixture-of-Experts LLMs, which proposes a method for identifying and fine-tuning context-faithful experts (see the first sketch after this list).
- Uncovering the Bigger Picture: Comprehensive Event Understanding Via Diverse News Retrieval, which introduces a framework for diverse news retrieval that enhances event coverage by modeling semantic variation at the sentence level.
- KCS: Diversify Multi-hop Question Generation with Knowledge Composition Sampling, which presents a framework for expanding the diversity of generated multi-hop questions.
- STARE at the Structure: Steering ICL Exemplar Selection with Structural Alignment, which proposes a two-stage exemplar selection strategy that balances efficiency, generalizability, and performance.
- InSQuAD: In-Context Learning for Efficient Retrieval via Submodular Mutual Information to Enforce Quality and Diversity, which introduces a unified selection strategy based on submodular mutual information to enforce quality and diversity among in-context exemplars (see the second sketch after this list).
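To make the first direction concrete, below is a minimal sketch of one plausible way to surface "context-faithful" experts in an MoE model: compare each expert's average routing weight on generations judged faithful to the provided context against its weight on unfaithful ones, and flag the experts with the largest gap as candidates for targeted fine-tuning. The routing statistics, faithfulness labels, and function name are all hypothetical; the paper's actual identification procedure may differ.

```python
import numpy as np

def rank_context_faithful_experts(routing_weights, faithful, k=4):
    """Rank MoE experts by their association with context-faithful outputs.

    routing_weights: (num_examples, num_experts) average router probability
                     assigned to each expert per example (hypothetical input).
    faithful:        (num_examples,) 1 if the model's answer was grounded in
                     the provided context, else 0 (hypothetical labels).
    """
    faithful = faithful.astype(bool)
    # Mean routing weight per expert on faithful vs. unfaithful examples.
    mean_faithful = routing_weights[faithful].mean(axis=0)
    mean_unfaithful = routing_weights[~faithful].mean(axis=0)
    # Experts with the largest positive gap are the candidates most
    # associated with context-grounded behavior.
    gap = mean_faithful - mean_unfaithful
    return np.argsort(gap)[::-1][:k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = rng.dirichlet(np.ones(8), size=200)  # 200 examples, 8 experts
    labels = rng.integers(0, 2, size=200)          # toy faithfulness labels
    print("candidate context-faithful experts:",
          rank_context_faithful_experts(weights, labels))
```

The flagged experts could then be fine-tuned on context-grounded data while the rest of the model stays frozen, which is the kind of targeted optimization the paper's framing suggests.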
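For the exemplar selection direction, the sketch below uses an MMR-style greedy rule as a simple stand-in for a quality-plus-diversity objective: each step picks the candidate most similar to the test query (quality) after penalizing similarity to already-chosen exemplars (diversity). InSQuAD's actual submodular mutual information functions may be instantiated differently; the embeddings, trade-off parameter, and function name here are illustrative assumptions.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def select_exemplars(query_emb, pool_embs, k=4, lam=0.7):
    """Greedy quality-plus-diversity selection of in-context exemplars."""
    # Quality term: relevance of each candidate exemplar to the query.
    relevance = np.array([cosine(e, query_emb) for e in pool_embs])
    selected = []
    while len(selected) < k:
        best_idx, best_score = None, -np.inf
        for i in range(len(pool_embs)):
            if i in selected:
                continue
            # Diversity term: similarity to the closest selected exemplar.
            redundancy = max(
                (cosine(pool_embs[i], pool_embs[j]) for j in selected),
                default=0.0,
            )
            score = lam * relevance[i] - (1 - lam) * redundancy
            if score > best_score:
                best_idx, best_score = i, score
        selected.append(best_idx)
    return selected

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    query = rng.normal(size=32)       # toy query embedding
    pool = rng.normal(size=(50, 32))  # 50 candidate exemplars
    print("selected exemplar indices:", select_exemplars(query, pool))
```

Greedy sweeps of this kind are the standard way to optimize monotone submodular objectives, which is one reason selection strategies in this family are formulated submodularly in the first place.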