Sequence modeling is shifting toward more efficient and scalable architectures aimed at improving large language models. Much of the recent work targets the attention mechanism, whose quadratic cost in sequence length dominates both training and inference. Researchers are reducing this cost through linear attention, block-sparse attention, and attention caching, and there is growing interest in recurrent architectures that can be trained in parallel, such as ParaRNN and MossNet. Together, these advances promise lower compute and memory requirements, extending large language models to longer contexts and a broader range of applications. Noteworthy papers include Sparser Block-Sparse Attention via Token Permutation, which increases block-level sparsity by reordering tokens before computing attention, and Kimi Linear, a hybrid linear attention architecture reported to outperform full attention in several scenarios.
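To make the efficiency argument concrete, the sketch below shows the generic kernel trick behind linear attention: instead of materializing the full sequence-by-sequence attention matrix, a running outer-product state is updated per token, so cost grows linearly with sequence length. This is a minimal illustration of the general idea only, not the specific Kimi Linear architecture; the feature map and function names here are illustrative assumptions.

```python
import numpy as np

def feature_map(x):
    # ELU(x) + 1 keeps features positive (a common generic choice; recent
    # linear-attention variants such as Kimi Linear use more elaborate
    # kernels and gating, which are not reproduced here).
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(q, k, v):
    """Causal linear attention for a single head.

    q, k: arrays of shape (seq_len, d_k); v: (seq_len, d_v).
    A (d_k, d_v) state accumulates phi(k_t) v_t^T, so each step costs
    O(d_k * d_v) instead of attending over all previous tokens.
    """
    q, k = feature_map(q), feature_map(k)
    d_k, d_v = q.shape[1], v.shape[1]
    state = np.zeros((d_k, d_v))   # running sum of outer products phi(k_t) v_t^T
    normalizer = np.zeros(d_k)     # running sum of phi(k_t) for normalization
    out = np.zeros_like(v)
    for t in range(q.shape[0]):
        state += np.outer(k[t], v[t])
        normalizer += k[t]
        out[t] = q[t] @ state / (q[t] @ normalizer + 1e-6)
    return out

# Tiny usage example with random data.
rng = np.random.default_rng(0)
q = rng.standard_normal((8, 4))
k = rng.standard_normal((8, 4))
v = rng.standard_normal((8, 4))
print(linear_attention(q, k, v).shape)  # (8, 4)
```

The same recurrence is what makes these models attractive at inference time: generation only needs the fixed-size state and normalizer, not a growing key-value cache.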