The field of Mixture-of-Experts (MoE) models is evolving rapidly, with current work focused on improving scalability, efficiency, and performance. Recent developments center on novel training frameworks, routing mechanisms, and scaling laws that aim to unlock the full potential of MoE architectures. One notable direction is elastic inference-time expert utilization, which lets a trained model adjust how many experts it activates to match the available computational budget. There is also growing interest in the internal mechanisms of MoE models, including expert-level behaviors and routing dynamics, and together these advances have produced measurable gains in performance, efficiency, and robustness. Noteworthy papers include Elastic MoE, which introduces a novel training framework for scalable MoE models, and Dynamic Experts Search, which proposes a test-time scaling strategy for enhancing reasoning in MoE models. Other works, such as Towards a Comprehensive Scaling Law of Mixture-of-Experts and Bayesian Mixture-of-Experts, make important contributions to our understanding of how MoE models scale and where they can be applied.
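To make the idea of elastic inference-time expert utilization concrete, the sketch below shows a standard top-k routed MoE layer in which the number of active experts per token can be changed at inference time. This is a minimal illustration of the general mechanism assumed above, not the Elastic MoE framework itself; the class name `ElasticTopKMoE` and parameters such as `k_train` are illustrative choices, and a production implementation would also need load-balancing losses and batched expert dispatch.

```python
# Minimal sketch of a top-k MoE layer whose number of active experts (k) can
# be varied at inference time to fit a compute budget. Illustrative only; it
# does not reproduce any specific paper's method.
from typing import Optional

import torch
import torch.nn as nn
import torch.nn.functional as F


class ElasticTopKMoE(nn.Module):
    def __init__(self, d_model: int, d_ff: int, num_experts: int, k_train: int = 2):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)  # gating network
        self.experts = nn.ModuleList(
            [
                nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
                for _ in range(num_experts)
            ]
        )
        self.k_train = k_train

    def forward(self, x: torch.Tensor, k: Optional[int] = None) -> torch.Tensor:
        # x: (tokens, d_model). During training the default k_train is used;
        # at inference, k can be raised or lowered to trade quality for compute.
        k = self.k_train if k is None else k
        logits = self.router(x)                      # (tokens, num_experts)
        topk_vals, topk_idx = logits.topk(k, dim=-1)
        weights = F.softmax(topk_vals, dim=-1)       # renormalize over selected experts
        out = torch.zeros_like(x)
        for slot in range(k):
            idx = topk_idx[:, slot]                  # chosen expert per token for this slot
            w = weights[:, slot].unsqueeze(-1)
            for e, expert in enumerate(self.experts):
                mask = idx == e
                if mask.any():
                    out[mask] += w[mask] * expert(x[mask])
        return out


if __name__ == "__main__":
    layer = ElasticTopKMoE(d_model=64, d_ff=256, num_experts=8, k_train=2)
    tokens = torch.randn(16, 64)
    y_cheap = layer(tokens, k=1)  # tight budget: one expert per token
    y_full = layer(tokens, k=4)   # larger budget: more experts per token
    print(y_cheap.shape, y_full.shape)
```

The key design point is that routing weights are renormalized over whichever k experts are selected, so the same trained router can be queried with different k values at deployment time.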