The field of large language model (LLM) inference is advancing rapidly, with a focus on improving efficiency and reducing latency. Recent work has centered on speculative decoding, which uses a smaller draft model to propose tokens that a larger target model then verifies, as sketched in the example below. This approach has shown significant promise for accelerating inference, and a range of techniques have been proposed to optimize the process, including adaptive rescheduling, dynamic speculative sampling, and fairness-aware batch formation. The use of intermediate qualifier models, context-dependent dynamic shortlisting, and parallel heterogeneous execution has also been explored to further improve speculative decoding efficiency. Together, these innovations enable faster inference without degrading output quality.

Noteworthy papers include Conformal Sparsification for Bandwidth-Efficient Edge-Cloud Speculative Decoding, which proposes a sparse quantize-and-sample framework that reduces end-to-end latency and rejection rates, and Mirror Speculative Decoding: Breaking the Serial Barrier in LLM Inference, which launches branch-complete rollouts from early-exit signals in parallel with the target model's suffix to exploit cross-device parallelism.
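To make the draft-and-verify loop concrete, the following is a minimal, self-contained Python sketch of standard speculative sampling: a cheap draft model proposes a short block of tokens, the target model scores them, and each token is accepted with probability min(1, p_target/p_draft), with a residual resample on the first rejection. The `toy_lm`, `draft_model`, and `target_model` functions are illustrative stand-ins, not the method of any particular paper above.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 8   # toy vocabulary size
K = 4       # number of tokens drafted per verification round


def toy_lm(context, skew):
    """Stand-in for a language model: returns a next-token distribution
    that depends deterministically on the current context length."""
    logits = np.sin(np.arange(VOCAB) + len(context) * skew)
    probs = np.exp(logits)
    return probs / probs.sum()


def draft_model(context):
    return toy_lm(context, skew=1.3)   # cheap, slightly "off" proposer


def target_model(context):
    return toy_lm(context, skew=1.0)   # expensive reference model


def speculative_step(context):
    """One draft-and-verify round of speculative sampling (simplified:
    the bonus token sampled after a fully accepted block is omitted)."""
    # 1. Draft model proposes K tokens autoregressively.
    drafted, draft_probs = [], []
    ctx = list(context)
    for _ in range(K):
        q = draft_model(ctx)
        tok = int(rng.choice(VOCAB, p=q))
        drafted.append(tok)
        draft_probs.append(q)
        ctx.append(tok)

    # 2. Target model scores every drafted position
    #    (a single batched forward pass in a real system).
    target_probs = [target_model(context + drafted[:i]) for i in range(K)]

    # 3. Accept each drafted token with prob min(1, p_target / p_draft);
    #    on the first rejection, resample from the residual distribution.
    accepted = []
    for i, tok in enumerate(drafted):
        p, q = target_probs[i], draft_probs[i]
        if rng.random() < min(1.0, p[tok] / q[tok]):
            accepted.append(tok)
        else:
            residual = np.maximum(p - q, 0.0)
            residual /= residual.sum()
            accepted.append(int(rng.choice(VOCAB, p=residual)))
            break
    return context + accepted


if __name__ == "__main__":
    context = [0]
    for _ in range(3):
        context = speculative_step(context)
    print("generated token ids:", context)
```

Because several drafted tokens can be accepted per target-model call, the target runs far fewer sequential steps than plain autoregressive decoding, which is the source of the speedups the techniques above build on.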