The field of deep learning is moving toward more efficient architectures, aiming to cut parameter counts and energy consumption without sacrificing accuracy. Recent work has introduced mechanisms such as dynamic scaling, adaptive batch scheduling, and decomposed hyperdimensional computing. These advances make it practical to deploy deep learning models on resource-constrained edge devices and improve their robustness to noise and variation. Noteworthy papers include FastBoost, which achieves state-of-the-art performance on the CIFAR benchmarks with a 2.1× reduction in parameters, and DecoHD, which compresses hyperdimensional classifiers with minimal loss of accuracy. LogHD, meanwhile, introduces a logarithmic reduction along the class axis, cutting memory requirements while preserving accuracy and robustness.
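To make the memory argument behind a logarithmic class-axis reduction concrete, the sketch below shows the general idea in a toy hyperdimensional classifier: instead of storing one D-dimensional prototype per class (O(C·D) memory), it stores one prototype pair per bit of the class index (O(log2(C)·D) memory) and recovers the class bit by bit at inference time. This is only an illustrative sketch of the concept under those assumptions, not the published LogHD construction; the class name `LogClassHD`, the helper `hd_encode`, and all parameters are hypothetical.

```python
import numpy as np


def hd_encode(x, projection):
    """Random-projection encoding of a feature vector into a bipolar hypervector."""
    return np.sign(projection @ x)


class LogClassHD:
    """Toy HDC classifier illustrating a logarithmic class-axis reduction.

    A conventional HDC classifier keeps one D-dimensional prototype per class
    (C * D values). Here each of C classes is addressed by its binary index,
    and one prototype pair is kept per bit, i.e. ceil(log2(C)) * 2 * D values.
    (Illustrative only; not the exact LogHD algorithm.)
    """

    def __init__(self, dim, n_features, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.dim = dim
        self.n_classes = n_classes
        self.n_bits = int(np.ceil(np.log2(n_classes)))
        self.projection = rng.standard_normal((dim, n_features))
        # One accumulator per (bit position, bit value) instead of one per class.
        self.prototypes = np.zeros((self.n_bits, 2, dim))

    def fit(self, X, y):
        for x, label in zip(X, y):
            h = hd_encode(x, self.projection)
            for b in range(self.n_bits):
                bit = (label >> b) & 1
                self.prototypes[b, bit] += h  # bundle the sample into the bit prototype

    def predict(self, x):
        h = hd_encode(x, self.projection)
        # Recover each bit of the class index by similarity, then reassemble the label.
        label = 0
        for b in range(self.n_bits):
            sims = self.prototypes[b] @ h
            label |= int(np.argmax(sims)) << b
        return min(label, self.n_classes - 1)
```

For C = 1000 classes and D = 10,000 dimensions, this kind of class-axis reduction would shrink the classifier memory from roughly 10^7 prototype values to about 2 × 10 × 10^4, which is the order-of-magnitude saving the paragraph alludes to; the accuracy and robustness trade-offs reported for LogHD are specific to its own construction and are not reproduced by this toy sketch.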