The field of large language models is moving toward more efficient and effective long-context inference. Recent work focuses on optimizing key-value (KV) cache management, attention mechanisms, and model architectures to reduce memory demands and improve performance. Researchers are exploring approaches such as sparse indexing, modality-adaptive cache eviction, and lagged eviction mechanisms to address the challenges of long-context inference. These advances stand to significantly improve the capabilities of large language models on tasks such as book summarization, question answering, and multimodal understanding.

Notable papers in this area include Learn From the Past for Sparse Indexing, which achieves up to a 22.8x speedup over full attention, and LazyEviction, which reduces KV cache size by 50% while maintaining comparable accuracy. In the multimodal setting, MadaKV proposes modality-adaptive cache eviction and LeoAM introduces adaptive hierarchical GPU-CPU-Disk KV management, both aimed at improving the efficiency of multimodal large language models.
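
To make the shared idea behind these eviction methods concrete, the sketch below shows a generic, attention-score-based KV cache eviction step. It is a minimal illustration only, not the algorithm of LazyEviction, MadaKV, or any other paper named above; the function name `evict_kv_cache` and the use of accumulated attention as an importance proxy are assumptions for demonstration.

```python
# Minimal sketch of score-based KV cache eviction (illustrative only;
# not the method of any specific paper discussed above).
import numpy as np

def evict_kv_cache(keys, values, attn_scores, budget):
    """Keep only the `budget` most-attended KV entries for one head.

    keys, values: (seq_len, head_dim) cached key/value arrays.
    attn_scores:  (seq_len,) accumulated attention each cached token has
                  received from recent queries (a common importance proxy).
    budget:       maximum number of KV entries to retain.
    """
    seq_len = keys.shape[0]
    if seq_len <= budget:
        return keys, values, np.arange(seq_len)

    # Rank cached tokens by importance, keep the top `budget`, and sort the
    # surviving indices so the original token order is preserved.
    keep = np.sort(np.argsort(attn_scores)[-budget:])
    return keys[keep], values[keep], keep

# Toy usage: compress a 16-token cache to an 8-entry budget.
rng = np.random.default_rng(0)
k = rng.standard_normal((16, 64))
v = rng.standard_normal((16, 64))
scores = rng.random(16)
k2, v2, kept = evict_kv_cache(k, v, scores, budget=8)
print(kept)        # indices of retained tokens
print(k2.shape)    # (8, 64)
```

The published methods differ mainly in how the importance score is computed and when eviction is triggered, for example by delaying eviction for tokens that may regain importance, or by adapting the budget per modality, but the retain-the-top-scoring-entries step above captures the basic mechanism.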