Efficient Deep Learning

The field of deep learning is moving towards more efficient and sustainable practices, with a focus on reducing computational costs and environmental impact. Researchers are optimizing deep neural networks by identifying critical learning periods, determining optimal network depth, and developing novel structured pruning techniques. These innovations can significantly accelerate training, reduce energy consumption, and improve model accuracy.

Noteworthy papers in this area include:

One Period to Rule Them All, which introduces a systematic approach for identifying critical learning periods during training, reducing training time by up to 59.67% and CO2 emissions by 59.47%.

Optimal Depth of Neural Networks, which proposes a theoretical framework for determining the optimal depth of a neural network, yielding significant gains in computational efficiency without compromising model accuracy.

Loss-Aware Automatic Selection of Structured Pruning Criteria, which presents an efficient pruning technique that automatically selects the pruning criterion and layer, reducing network FLOPs by 52% while improving top-1 accuracy.
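To make the pruning-criterion idea concrete, below is a minimal sketch of loss-aware criterion selection in PyTorch: each candidate criterion is tried on a throwaway copy of the model, and the criterion whose pruned copy incurs the smallest loss on a batch is kept. The toy model, the candidate set (L1 and L2 filter norms), and `prune_ratio` are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of loss-aware pruning-criterion selection: try each candidate
# criterion on a throwaway copy of the model and keep the one whose pruned
# model incurs the smallest loss. Criteria, model, and prune_ratio are
# illustrative assumptions, not the paper's implementation.
import copy

import torch
import torch.nn as nn


def l1_scores(w: torch.Tensor) -> torch.Tensor:
    # Importance of each output filter = L1 norm of its weights.
    return w.abs().flatten(1).sum(dim=1)


def l2_scores(w: torch.Tensor) -> torch.Tensor:
    # Importance of each output filter = L2 norm of its weights.
    return w.flatten(1).norm(dim=1)


CRITERIA = {"l1": l1_scores, "l2": l2_scores}  # assumed candidate criteria


def zero_least_important_filters(conv: nn.Conv2d, scores: torch.Tensor,
                                 prune_ratio: float = 0.5) -> None:
    # Stand-in for structured pruning: zero the lowest-scoring filters
    # (a real implementation would remove them to actually cut FLOPs).
    k = int(conv.out_channels * prune_ratio)
    idx = scores.argsort()[:k]
    with torch.no_grad():
        conv.weight[idx] = 0.0
        if conv.bias is not None:
            conv.bias[idx] = 0.0


def select_criterion(model: nn.Module, layer_name: str, loss_fn, batch):
    # Evaluate every candidate criterion on one layer; return the one whose
    # pruned copy of the model gives the smallest loss on the batch.
    x, y = batch
    best_name, best_loss = None, float("inf")
    for name, score_fn in CRITERIA.items():
        trial = copy.deepcopy(model)
        conv = dict(trial.named_modules())[layer_name]
        zero_least_important_filters(conv, score_fn(conv.weight))
        with torch.no_grad():
            loss = loss_fn(trial(x), y).item()
        if loss < best_loss:
            best_name, best_loss = name, loss
    return best_name, best_loss


if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
    )
    batch = (torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,)))
    name, loss = select_criterion(model, "0", nn.CrossEntropyLoss(), batch)
    print(f"selected criterion for layer 0: {name} (loss {loss:.3f})")
```

Zeroing filters stands in for structurally removing them, which keeps the sketch short while preserving the selection logic; the same loop could rank layers as well as criteria.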

Sources

One Period to Rule Them All: Identifying Critical Learning Periods in Deep Networks

Knee-Deep in C-RASP: A Transformer Depth Hierarchy

Optimal Depth of Neural Networks

On the algorithmic construction of deep ReLU networks

Loss-Aware Automatic Selection of Structured Pruning Criteria for Deep Neural Network Acceleration

Linearity-based neural network compression

Towards an Optimal Control Perspective of ResNet Training
