The field of Mixture-of-Experts (MoE) models is moving towards more efficient scaling, with a focus on reducing computational and memory overhead. Recent work introduces techniques such as static quantization, dynamic expert pruning, and expert merging to achieve extreme compression with minimal accuracy loss. These advances have the potential to make the deployment of MoE-based models significantly more efficient and scalable. Noteworthy papers in this area include: MC#, which achieves a 6.2 times weight reduction at 2.57 average bits with only a 1.7% accuracy drop across five multimodal benchmarks; and REAP the Experts, which demonstrates that expert pruning is superior to expert merging for generative tasks and achieves near-lossless compression on code generation and tool-calling tasks.
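
To make the expert-pruning idea concrete, below is a minimal sketch of saliency-based expert pruning for a single MoE layer, loosely in the spirit of the pruning approaches mentioned above. The saliency metric (mean routing probability over a calibration set), the function names, and the toy shapes are illustrative assumptions rather than the exact algorithms of MC# or REAP.

```python
import torch


def expert_saliency(router_probs: torch.Tensor) -> torch.Tensor:
    """Estimate per-expert saliency from router assignment probabilities.

    router_probs: [num_tokens, num_experts] softmax outputs of the gate,
    collected on a small calibration set. Saliency here is simply the
    average routing weight each expert receives (an assumed metric).
    """
    return router_probs.mean(dim=0)  # [num_experts]


def prune_experts(expert_weights: list[torch.Tensor],
                  router_probs: torch.Tensor,
                  num_keep: int):
    """Keep the `num_keep` highest-saliency experts and drop the rest.

    expert_weights: one weight tensor per expert (stand-in for full expert FFNs).
    Returns the retained experts plus the kept indices, which would be used
    to remap the router's output dimension after pruning.
    """
    saliency = expert_saliency(router_probs)
    keep_idx = torch.topk(saliency, k=num_keep).indices.sort().values
    kept = [expert_weights[i] for i in keep_idx.tolist()]
    return kept, keep_idx


# Toy usage: 8 experts, retain the 4 that receive the most routing mass.
torch.manual_seed(0)
experts = [torch.randn(16, 64) for _ in range(8)]
calib_router_probs = torch.softmax(torch.randn(1024, 8), dim=-1)
kept_experts, kept_idx = prune_experts(experts, calib_router_probs, num_keep=4)
print("retained expert indices:", kept_idx.tolist())
```

In practice, methods in this line of work differ mainly in how saliency is defined (e.g. weighting by router probability versus measuring the effect on layer outputs) and in whether pruned experts are discarded outright or merged into the retained ones; the sketch above only shows the simplest discard variant.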