Deep learning research is increasingly focused on efficient model compression and data curation as ways to cut computational costs without sacrificing accuracy. Recent work explores pruning neural networks, retaining essential representations, and selecting the most informative training samples. Noteworthy papers include:
- Beyond One-Way Pruning: Bidirectional Pruning-Regrowth for Extreme Accuracy-Sparsity Tradeoff, which proposes a bidirectional pruning-regrowth strategy to mitigate the sharp accuracy drop that one-way pruning suffers at high sparsity (a generic prune-and-regrow sketch follows this list).
- UNSEEN: Enhancing Dataset Pruning from a Generalization Perspective, which introduces a plug-and-play dataset-pruning framework built around generalization, matching full-data performance while discarding 30% of the ImageNet-1K training set (a generic score-and-select sketch appears below).
- Weight Variance Amplifier Improves Accuracy in High-Sparsity One-Shot Pruning, which proposes a Variance Amplifying Regularizer that deliberately increases the variance of model parameters during training, making the network more robust to one-shot pruning (see the regularizer sketch below).
- Teacher-Guided One-Shot Pruning via Context-Aware Knowledge Distillation, which introduces a teacher-guided pruning framework that couples knowledge distillation with importance-score estimation, reaching high sparsity with minimal performance degradation (a combined KD-plus-importance sketch closes the section).
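To make the bidirectional idea concrete, here is a minimal prune-and-regrow step in PyTorch. This is an illustrative sketch, not the paper's algorithm: the regrowth criterion used here (re-enabling pruned weights with the largest gradient magnitude, in the style of RigL-type methods) and the `regrow_frac` parameter are assumptions.

```python
import torch

def prune_regrow_step(weight: torch.Tensor, grad: torch.Tensor,
                      sparsity: float, regrow_frac: float) -> torch.Tensor:
    """Return a {0,1} mask: prune to `sparsity`, then regrow a fraction
    of the pruned slots where the loss gradient is largest."""
    n = weight.numel()
    n_keep = int(n * (1.0 - sparsity))
    # 1) Prune: keep the n_keep largest-magnitude weights.
    keep_idx = weight.abs().flatten().topk(n_keep).indices
    mask = torch.zeros(n, dtype=torch.bool, device=weight.device)
    mask[keep_idx] = True
    # 2) Regrow: re-enable the pruned slots whose gradient magnitude is
    #    largest, restoring connections that pruning removed too eagerly.
    #    (Final density is n_keep + n_regrow active weights.)
    n_regrow = int((n - n_keep) * regrow_frac)
    pruned_grads = grad.abs().flatten().masked_fill(mask, float("-inf"))
    regrow_idx = pruned_grads.topk(n_regrow).indices
    mask[regrow_idx] = True
    return mask.view_as(weight).float()
```

After a backward pass, one would call `mask = prune_regrow_step(w, w.grad, 0.95, 0.1)` and apply it with `w.data.mul_(mask)`, masking gradients during subsequent training.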
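For the dataset-pruning pattern, the sketch below shows the generic score-and-select loop that plug-and-play frameworks like UNSEEN slot into. UNSEEN's actual generalization-based criterion is not reproduced here; the per-sample score used (error-vector norm, EL2N-style) is a stand-in assumption.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, Subset

@torch.no_grad()
def score_dataset(model, dataset, num_classes, batch_size=256, device="cpu"):
    """Assign each training example a scalar informativeness score."""
    model.eval().to(device)
    scores = []
    for x, y in DataLoader(dataset, batch_size=batch_size):
        probs = F.softmax(model(x.to(device)), dim=1)
        onehot = F.one_hot(y.to(device), num_classes).float()
        # EL2N-style score: norm of the error vector (assumed criterion).
        scores.append((probs - onehot).norm(dim=1).cpu())
    return torch.cat(scores)

def prune_dataset(dataset, scores, keep_frac=0.7):
    """Keep the highest-scoring keep_frac of examples (e.g., 70%)."""
    n_keep = int(len(dataset) * keep_frac)
    keep_idx = scores.topk(n_keep).indices.tolist()
    return Subset(dataset, keep_idx)
```

Any scoring function can be swapped in here; the framework's value lies in which score it computes, not in the selection loop itself.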
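The variance-amplifying idea admits a very short sketch, assuming (as the summary suggests) a penalty that rewards larger per-layer weight variance so that magnitude pruning separates important from unimportant weights more cleanly. The coefficient name and exact form are assumptions.

```python
import torch
import torch.nn as nn

def variance_amplifying_penalty(model: nn.Module) -> torch.Tensor:
    """Negative mean per-layer weight variance: minimizing this term
    *increases* the spread of weight magnitudes."""
    variances = [p.var() for _, p in model.named_parameters()
                 if p.dim() > 1]  # weight matrices/kernels only, skip biases
    return -torch.stack(variances).mean()

# In the training loop (lambda_var is a hypothetical hyperparameter):
# loss = task_loss + lambda_var * variance_amplifying_penalty(model)
```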
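Finally, a sketch of coupling knowledge distillation with importance estimation, assuming a standard Hinton-style KD loss and a first-order importance score |w · dL/dw| computed under the distillation objective. The paper's "context-aware" component is not reproduced here.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Softened KL-divergence distillation loss with temperature T."""
    return F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T

def kd_importance_scores(student, teacher, x):
    """First-order importance |w * grad| of each weight w.r.t. the KD loss,
    so the teacher's signal guides which weights survive pruning."""
    student.zero_grad()
    with torch.no_grad():
        t_logits = teacher(x)
    loss = kd_loss(student(x), t_logits)
    loss.backward()
    return {name: (p * p.grad).abs()
            for name, p in student.named_parameters() if p.grad is not None}

# One-shot pruning would then zero the weights with the smallest scores,
# e.g., by globally thresholding the concatenated score tensors.
```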