The field of deep learning is moving toward more efficient and sustainable practices, with a focus on reducing computational costs and environmental impact. Researchers are optimizing deep neural networks along several fronts: identifying critical learning periods, determining optimal network depth, and developing novel pruning techniques. These innovations can significantly accelerate training, reduce energy consumption, and improve model accuracy. Noteworthy papers in this area include:

- One Period to Rule Them All, which introduces a systematic approach for identifying critical periods during training, reducing training time by up to 59.67% and CO2 emissions by 59.47%.
- Optimal Depth of Neural Networks, which proposes a theoretical framework for determining the optimal depth of a network, yielding significant gains in computational efficiency without compromising model accuracy.
- Loss-Aware Automatic Selection of Structured Pruning Criteria, which presents an efficient pruning technique that automatically selects the pruning criterion and target layer, reducing network FLOPs by 52% while improving top-1 accuracy (a simplified baseline criterion is sketched after this list).
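
To make the structured-pruning idea concrete, here is a minimal sketch of one widely used baseline criterion: ranking a convolutional layer's output filters by the L1 norm of their weights and dropping the lowest-scoring ones. This is not the loss-aware automatic selection method of the paper above; the function names, the 50% pruning fraction, and the single-layer scope are illustrative assumptions only.

```python
import torch
import torch.nn as nn

def l1_filter_scores(conv: nn.Conv2d) -> torch.Tensor:
    # Score each output filter by the L1 norm of its weights.
    # This is a simple hand-picked criterion; the paper instead
    # selects among criteria automatically based on the loss.
    return conv.weight.detach().abs().sum(dim=(1, 2, 3))

def prune_conv_filters(conv: nn.Conv2d, fraction: float) -> nn.Conv2d:
    # Return a new Conv2d with the lowest-scoring `fraction` of
    # filters removed. Note: in a full network, the next layer's
    # input channels must be shrunk to match; this sketch handles
    # one layer in isolation.
    scores = l1_filter_scores(conv)
    n_keep = max(1, int(conv.out_channels * (1.0 - fraction)))
    keep = torch.argsort(scores, descending=True)[:n_keep]
    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    return pruned

# Example: prune 50% of the filters in a single layer.
layer = nn.Conv2d(64, 128, kernel_size=3, padding=1)
smaller = prune_conv_filters(layer, fraction=0.5)
print(smaller.out_channels)  # 64
```

Physically removing filters, rather than merely zeroing them, is what delivers the FLOP reductions these papers report, since the pruned layer computes fewer output channels outright.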