The field of neural network compression is moving towards models that reduce memory and computational costs without sacrificing accuracy. Researchers are exploring techniques such as progressive depth expansion, low-rank decomposition, and structured sparsity, and several approaches achieve substantial reductions in memory footprint and compute while maintaining competitive accuracy. Notably, some works combine multiple compression techniques: Vanishing Contributions, for instance, provides a generalizable method for smoothly transitioning neural models into compressed form. Other noteworthy papers include Optimally Deep Networks, which adapts model depth to the dataset for improved efficiency, and D-com, which accelerates iterative processing to enable low-rank decomposition of activations. In neural video compression, Real-Time Neural Video Compression with Unified Intra and Inter Coding outperforms existing schemes by an average BD-rate reduction of 10.7%. Overall, the field is converging on more efficient, compressed networks through both novel techniques and combinations of existing methods.
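To make the low-rank decomposition idea mentioned above concrete, the sketch below factorizes a linear layer's weight matrix via truncated SVD into two smaller layers. This is a minimal, generic illustration under assumed layer sizes and rank, not the specific procedure used by D-com or any of the other cited papers.

```python
# Illustrative sketch: compressing a weight matrix with truncated SVD,
# the basic mechanism behind low-rank decomposition of network layers.
import torch
import torch.nn as nn

def low_rank_factorize(layer: nn.Linear, rank: int) -> nn.Sequential:
    """Replace a Linear layer with two smaller ones whose product
    approximates the original weight matrix at the given rank."""
    W = layer.weight.data                       # shape: (out_features, in_features)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * S[:rank]                # absorb singular values into U
    V_r = Vh[:rank, :]

    first = nn.Linear(layer.in_features, rank, bias=False)
    second = nn.Linear(rank, layer.out_features, bias=layer.bias is not None)
    first.weight.data = V_r                     # (rank, in_features)
    second.weight.data = U_r                    # (out_features, rank)
    if layer.bias is not None:
        second.bias.data = layer.bias.data
    return nn.Sequential(first, second)

# Hypothetical example: a 1024x1024 layer (~1.05M weights) shrinks to
# roughly 131k weights at rank 64, at the cost of approximation error.
layer = nn.Linear(1024, 1024)
compressed = low_rank_factorize(layer, rank=64)
x = torch.randn(8, 1024)
print((layer(x) - compressed(x)).abs().max())   # reconstruction error
```

The rank controls the trade-off: smaller ranks give larger savings but larger approximation error, which is why compression methods typically fine-tune or select ranks per layer rather than applying a single fixed rank everywhere.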