The field of natural language processing is seeing rapid progress in text reranking and retrieval-augmented generation (RAG). One notable direction integrates supervised fine-tuning with reinforcement learning to improve the relevance and ranking of retrieved documents. Another is parametric RAG: models that encode external knowledge, such as legal corpora, directly into parameter vectors, reducing the need for long context windows and improving performance on downstream tasks. There is also growing interest in systems that balance relevance and recency for temporal information retrieval, and in methods for detecting confabulations in large language models.

Noteworthy papers in this area include ERank, which proposes a two-stage training pipeline for text reranking; DistilledPRAG, which introduces a knowledge-distilled parametric RAG model for privacy-preserving reasoning; Re3, which presents a unified framework for balancing relevance and recency in temporal information retrieval; MeVe, which offers a modular system for memory verification and effective context control in language models; and PL-CA, which proposes a parametric legal case augmentation framework. Together, these advances stand to improve both the performance and the reliability of text reranking and retrieval-augmented generation systems.
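Re3's actual formulation is not reproduced here, but the relevance-recency trade-off it targets can be illustrated with a minimal sketch: a linear blend of a relevance score and an exponential recency decay. The function name, the weight `alpha`, and the `half_life_days` parameter are hypothetical choices for illustration, not values from any of the papers above.

```python
def temporal_score(relevance: float, doc_age_days: float,
                   alpha: float = 0.7, half_life_days: float = 30.0) -> float:
    """Blend a relevance score in [0, 1] with an exponential recency decay.

    alpha and half_life_days are illustrative knobs, not published values.
    """
    recency = 0.5 ** (doc_age_days / half_life_days)  # 1.0 for a fresh document
    return alpha * relevance + (1 - alpha) * recency

# Rank a small candidate list of (doc_id, relevance, age_in_days) tuples:
# a slightly less relevant but much fresher document can outrank a stale one.
candidates = [("a", 0.90, 365.0), ("b", 0.80, 1.0), ("c", 0.85, 30.0)]
ranked = sorted(candidates, key=lambda d: temporal_score(d[1], d[2]),
                reverse=True)
```

With these example weights, document "b" (relevance 0.80, one day old) ranks above document "a" (relevance 0.90, one year old), showing how the blend trades a little relevance for freshness.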