The field of large language models is moving toward greater efficiency and accuracy in reasoning tasks. Researchers are exploring methods to reduce computational cost and improve model performance, such as sparse attention mechanisms, layer skipping, and uncertainty quantification. Noteworthy papers include:

- ProxRouter, which improves robustness to outliers in nonparametric routers.
- DELTA, a training-free sparse attention mechanism that achieves computational efficiency without sacrificing model accuracy.
- Trace Length is a Simple Uncertainty Signal in Reasoning Models, which establishes trace length as a practical confidence measure for large reasoning models.
- Tracing the Traces, which introduces Latent-Trajectory signals to predict solution accuracy and improve inference-time efficiency.
- APCE, a context-aware solution that reduces memory footprint and mitigates ContextRot effects in long-context processing.
- NOSA, a trainable sparse attention framework that enables KV cache offloading and improves decoding throughput.
- LiteStage, a latency-aware layer-skipping framework for multi-stage reasoning that balances efficiency and accuracy.
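To illustrate the kind of signal the trace-length work describes, here is a minimal sketch of using reasoning-trace length as a confidence proxy. It assumes, per the paper's headline claim, that longer traces correlate with harder problems and lower accuracy; the function names, the whitespace token count, and the normalization scheme are hypothetical illustrations, not the paper's actual method.

```python
def trace_length_confidence(traces):
    """Map each reasoning trace to a confidence score in [0, 1).

    Assumption (hedged): longer traces indicate lower confidence,
    so the shortest trace receives the highest score.
    """
    lengths = [len(t.split()) for t in traces]  # crude whitespace token count
    max_len = max(lengths)
    return [1.0 - length / (max_len + 1) for length in lengths]


def selective_answer(traces, answers, threshold=0.5):
    """Abstain (return None) on low-confidence, i.e. long-trace, items."""
    confidences = trace_length_confidence(traces)
    return [a if c >= threshold else None
            for a, c in zip(answers, confidences)]
```

In a selective-answering setup, such a score lets a system answer only when the trace is short relative to its batch, deferring the rest to a human or a stronger model.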