Efficient Model Compression and Data Curation

Research in deep learning is increasingly focused on efficient model compression and data curation, aiming to cut computational cost without sacrificing performance. Current work explores pruning neural networks while retaining their essential representations, and selecting only the most informative training samples. Noteworthy papers include:

  • Beyond One-Way Pruning: Bidirectional Pruning-Regrowth for Extreme Accuracy-Sparsity Tradeoff, which proposes a bidirectional pruning-regrowth strategy to mitigate the sharp accuracy drop under high sparsity conditions.
  • UNSEEN: Enhancing Dataset Pruning from a Generalization Perspective, which introduces a plug-and-play framework for dataset pruning from the perspective of generalization, achieving lossless performance while reducing training data by 30% on ImageNet-1K.
  • Weight Variance Amplifier Improves Accuracy in High-Sparsity One-Shot Pruning, which proposes a Variance Amplifying Regularizer to deliberately increase the variance of model parameters, promoting pruning robustness.
  • Teacher-Guided One-Shot Pruning via Context-Aware Knowledge Distillation, which introduces a novel teacher-guided pruning framework that tightly integrates Knowledge Distillation with importance score estimation, achieving high sparsity levels with minimal performance degradation.
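The bidirectional pruning-regrowth idea from the first paper can be illustrated with a minimal sketch. The function below is a hypothetical simplification, not the paper's exact algorithm: it magnitude-prunes a weight tensor to a target sparsity (the one-way step), then swaps a fraction of the weakest surviving weights for pruned positions with the largest gradient magnitude (the regrowth step), keeping overall sparsity fixed.

```python
import numpy as np

def prune_regrow_step(weights, grads, sparsity=0.9, regrow_frac=0.1):
    """One bidirectional pruning-regrowth step (illustrative sketch).

    weights, grads : arrays of the same shape (parameters and their gradients)
    sparsity       : target fraction of weights set to zero
    regrow_frac    : fraction of kept weights to swap for regrown ones
    Returns a boolean mask of the same shape (True = weight kept).
    """
    w = weights.ravel()
    g = grads.ravel()
    n = w.size
    n_keep = int(round(n * (1.0 - sparsity)))

    # One-way pruning: keep the n_keep largest-magnitude weights.
    mask = np.zeros(n, dtype=bool)
    mask[np.argsort(-np.abs(w))[:n_keep]] = True

    # Regrowth: revive pruned positions with the largest gradient magnitude,
    # dropping an equal number of the weakest survivors to hold sparsity fixed.
    n_swap = int(round(n_keep * regrow_frac))
    if n_swap > 0:
        pruned = np.where(~mask)[0]
        revive = pruned[np.argsort(-np.abs(g[pruned]))[:n_swap]]
        kept = np.where(mask)[0]
        drop = kept[np.argsort(np.abs(w[kept]))[:n_swap]]
        mask[revive] = True
        mask[drop] = False

    return mask.reshape(weights.shape)
```

In practice such a step would be interleaved with fine-tuning, so that regrown weights get a chance to recover accuracy before the next pruning round.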
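The variance-amplifying idea from the third paper can likewise be sketched as a loss term. The following is an assumed minimal form (a negative-variance penalty added to the training loss), not the paper's exact regularizer: rewarding larger parameter variance spreads weight magnitudes apart, so a later one-shot magnitude pruning separates important from unimportant weights more cleanly.

```python
import numpy as np

def variance_amplifying_penalty(param_tensors, lam=1e-4):
    """Hypothetical variance-amplifying regularizer (sketch).

    Adds -lam * sum of per-tensor variances to the loss, i.e. the penalty
    *decreases* as weights spread apart, encouraging high-variance parameters.
    """
    return -lam * sum(np.var(w) for w in param_tensors)
```

During training this term would be added to the task loss; a tensor whose weights drift toward a wider distribution lowers the total loss, which is the intended pressure toward pruning robustness.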

Sources

Accuracy-Preserving CNN Pruning Method under Limited Data Availability

Beyond One-Way Pruning: Bidirectional Pruning-Regrowth for Extreme Accuracy-Sparsity Tradeoff

D$^{2}$-VPR: A Parameter-efficient Visual-foundation-model-based Visual Place Recognition Method via Knowledge Distillation and Deformable Aggregation

UNSEEN: Enhancing Dataset Pruning from a Generalization Perspective

Online Data Curation for Object Detection via Marginal Contributions to Dataset-level Average Precision

Weight Variance Amplifier Improves Accuracy in High-Sparsity One-Shot Pruning

Teacher-Guided One-Shot Pruning via Context-Aware Knowledge Distillation
