The field of natural language processing is moving toward more efficient attention mechanisms for large language models. Recent research has focused on reducing the computational cost of attention, whose quadratic scaling with sequence length is a major bottleneck in scaling these models. Proposed approaches include sparse attention, sub-quadratic attention, and basis decomposition of the attention projections. These innovations have yielded substantial gains in throughput and memory efficiency, helping to enable the deployment of large language models on resource-constrained devices.

Noteworthy papers in this area include ProxyAttn, which achieves up to 10.3x attention acceleration and 2.4x prefilling acceleration without significant performance loss; Accelerating Attention with Basis Decomposition, which presents a lossless algorithmic reformulation of attention that yields 32% faster key/value projections and 25% smaller weights; and Sparse Query Attention, a promising approach that lowers the computational cost of attention by decreasing the number of query heads, delivering throughput improvements of up to 3x.
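To make the query-head-reduction idea concrete, the minimal PyTorch sketch below implements an attention layer that computes attention over fewer heads than a standard multi-head baseline, which is the general mechanism Sparse Query Attention exploits. The head counts, projection shapes, and names (`ReducedQueryHeadAttention`, `n_query_heads`, `head_dim`) are illustrative assumptions, not the paper's actual configuration or code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReducedQueryHeadAttention(nn.Module):
    """Illustrative attention layer with fewer heads than a standard
    multi-head baseline. Computing H_q < H attention maps shrinks the
    quadratic-in-length score/softmax cost roughly by H_q / H.
    Head counts and shapes are assumptions for illustration only."""

    def __init__(self, d_model: int = 512, n_query_heads: int = 4, head_dim: int = 64):
        super().__init__()
        self.n_q = n_query_heads
        self.d_head = head_dim
        # Queries, keys, and values all use the reduced head count, so the
        # expensive per-head score computation runs for only n_q heads.
        self.q_proj = nn.Linear(d_model, n_query_heads * head_dim)
        self.k_proj = nn.Linear(d_model, n_query_heads * head_dim)
        self.v_proj = nn.Linear(d_model, n_query_heads * head_dim)
        self.o_proj = nn.Linear(n_query_heads * head_dim, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, L, _ = x.shape
        # Reshape to (batch, heads, length, head_dim).
        q = self.q_proj(x).view(b, L, self.n_q, self.d_head).transpose(1, 2)
        k = self.k_proj(x).view(b, L, self.n_q, self.d_head).transpose(1, 2)
        v = self.v_proj(x).view(b, L, self.n_q, self.d_head).transpose(1, 2)
        # Causal scaled dot-product attention over the reduced set of heads.
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.o_proj(out.transpose(1, 2).reshape(b, L, self.n_q * self.d_head))

x = torch.randn(2, 128, 512)
print(ReducedQueryHeadAttention()(x).shape)  # torch.Size([2, 128, 512])
```

Since the score and softmax work is performed per head, cutting the number of attention heads reduces that quadratic-in-length term roughly in proportion to the head count, which is the kind of saving the reported throughput improvements rely on.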