The field of large language models is moving toward more efficient and reliable models, with researchers exploring methods that reduce the computational resources required for inference while preserving safety and performance. One key direction is dynamic pruning that adaptively preserves alignment-relevant circuits during inference. Another is uncertainty-aware quantization, which reframes low-bit quantization as risk minimization. There is also growing interest in post-hoc uncertainty estimation frameworks for fine-tuned large language models. Minimal, hedged sketches of each idea follow the paper list below.

Notable papers include:

- Alignment-Constrained Dynamic Pruning for LLMs introduces Alignment-Aware Probe Pruning, which improves refusal rates by 50% at matched compute.
- BayesQ, an uncertainty-guided post-training quantization framework, improves over strong PTQ baselines on ResNet-50 and BERT-base.
- Bayesian Mixture of Experts For Large Language Models presents a post-hoc uncertainty estimation framework for fine-tuned large language models based on Mixture-of-Experts architectures.
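To make the probe-pruning idea concrete, here is a minimal sketch. It assumes a lightweight linear probe that scores each attention head for alignment relevance, blended with a standard magnitude criterion; the probe, the trade-off weight `lam`, and the per-head scoring rule are illustrative assumptions, not the method from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: per-head activation statistics for a 12-head layer.
n_heads, d_head = 12, 64
head_acts = rng.normal(size=(n_heads, d_head))  # mean activations per head
probe_w = rng.normal(size=d_head)               # hypothetical linear probe trained
                                                # to detect alignment-relevant features

# Magnitude-based importance (a standard pruning signal).
magnitude_score = np.linalg.norm(head_acts, axis=1)

# Hypothetical alignment relevance: probe response per head, squashed to [0, 1].
alignment_score = 1.0 / (1.0 + np.exp(-head_acts @ probe_w))

# Combined score: heads that matter for alignment are protected even when
# their magnitude alone would fall below the pruning threshold.
lam = 2.0  # assumed trade-off weight between efficiency and alignment
score = magnitude_score + lam * alignment_score

keep_ratio = 0.5
k = int(np.ceil(keep_ratio * n_heads))
keep = np.argsort(score)[-k:]  # indices of the top-k heads to keep
mask = np.zeros(n_heads, dtype=bool)
mask[keep] = True
print("kept heads:", np.sort(keep))
```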
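For quantization as risk minimization, the following is a sketch under stated assumptions: the per-weight `importance` array (standing in for a posterior-derived uncertainty estimate such as inverse variance or diagonal Fisher information) and the grid search over a single symmetric scale are illustrative choices, not BayesQ's actual objective or algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy weights and a per-weight importance proxy (assumed: an uncertainty
# estimate like inverse posterior variance or diagonal Fisher information).
w = rng.normal(scale=0.5, size=4096)
importance = rng.gamma(shape=2.0, scale=1.0, size=w.shape)

def quantize(w, scale, n_bits=4):
    """Symmetric uniform quantization to n_bits."""
    qmax = 2 ** (n_bits - 1) - 1
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale

def risk(w, w_hat, importance):
    """Importance-weighted squared error: uncertainty-sensitive weights
    contribute more to the objective than a plain MSE would allow."""
    return np.mean(importance * (w - w_hat) ** 2)

# Grid search over the scale: plain-MSE choice vs. risk-minimizing choice.
scales = np.linspace(0.01, 0.2, 200)
mse_scale = min(scales, key=lambda s: np.mean((w - quantize(w, s)) ** 2))
risk_scale = min(scales, key=lambda s: risk(w, quantize(w, s), importance))
print(f"MSE-optimal scale:  {mse_scale:.4f}")
print(f"risk-optimal scale: {risk_scale:.4f}")
```

The point of the sketch is the objective swap: the same quantizer is tuned against an expected, importance-weighted loss rather than raw reconstruction error.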
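For MoE-based post-hoc uncertainty, a sketch assuming K expert heads with fixed gating weights over one token's vocabulary distribution; the decomposition of predictive entropy into aleatoric and epistemic parts is a standard Bayesian ensemble technique assumed here for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy: K expert heads each produce logits over a small vocabulary for one token.
K, V = 4, 10
expert_logits = rng.normal(size=(K, V))
gate = np.array([0.4, 0.3, 0.2, 0.1])  # assumed post-hoc gating weights

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

expert_probs = softmax(expert_logits)  # (K, V) per-expert distributions
mixture = gate @ expert_probs          # mixture predictive distribution, (V,)

# Decompose predictive uncertainty: total entropy of the mixture vs. the
# gate-weighted average entropy of the individual experts.
total = -np.sum(mixture * np.log(mixture))
aleatoric = -np.sum(gate * np.sum(expert_probs * np.log(expert_probs), axis=1))
epistemic = total - aleatoric  # mutual information: disagreement between experts
print(f"total={total:.3f} aleatoric={aleatoric:.3f} epistemic={epistemic:.3f}")
```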