Efficient Deep Learning Architectures

The field of deep learning is moving towards more efficient architectures, reducing parameter counts and energy consumption without sacrificing accuracy. Recent work introduces mechanisms such as dynamic scaling, architecture-aware adaptive batch scheduling, forward-only (backpropagation-free) training, and decomposed hyperdimensional computing. These advances enable deployment of deep learning models on resource-constrained edge devices and improve robustness to noise and variation. Noteworthy papers include FastBoost, which achieves state-of-the-art performance on the CIFAR benchmarks with a 2.1x reduction in parameters, and DecoHD, which compresses hyperdimensional classifiers under extreme memory budgets with minimal accuracy degradation. LogHD, in turn, introduces a logarithmic class-axis reduction that cuts memory requirements while preserving accuracy and robustness.
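
To make the hyperdimensional-computing thread concrete, the sketch below shows a minimal bipolar hyperdimensional (HD) classifier and a generic log2-style class-axis compression of its class memory. This is an assumption-laden illustration of the general idea only; the encoding, the bit-plane scheme, and all names here are hypothetical and are not the actual DecoHD or LogHD algorithms.

```python
# Illustrative sketch only: a minimal bipolar hyperdimensional (HD) classifier.
# The log2-style class-axis compression below is a generic assumption for
# exposition; it is not the exact DecoHD or LogHD method.
import numpy as np

rng = np.random.default_rng(0)
D = 10_000          # hypervector dimensionality
C = 16              # number of classes
N_FEATURES = 64     # input feature count

# Random bipolar projection used to encode inputs into HD space.
projection = rng.choice([-1, 1], size=(N_FEATURES, D))

def encode(x):
    """Encode a feature vector as a bipolar hypervector."""
    return np.sign(x @ projection)

# Baseline HD classifier: one prototype hypervector per class -> C * D memory.
def train_prototypes(X, y):
    prototypes = np.zeros((C, D))
    for xi, yi in zip(X, y):
        prototypes[yi] += encode(xi)   # bundle (sum) encoded samples per class
    return np.sign(prototypes)

def predict(prototypes, x):
    return int(np.argmax(prototypes @ encode(x)))  # dot-product similarity

# Class-axis compression idea (assumed, simplified): instead of C prototypes,
# keep ceil(log2(C)) "bit-plane" hypervectors, one per bit of the class index,
# so class memory scales with log2(C) * D rather than C * D.
BITS = int(np.ceil(np.log2(C)))

def train_bitplanes(X, y):
    planes = np.zeros((BITS, D))
    for xi, yi in zip(X, y):
        h = encode(xi)
        for b in range(BITS):
            planes[b] += h if (yi >> b) & 1 else -h  # sign encodes the class bit
    return np.sign(planes)

def predict_bitplanes(planes, x):
    h = encode(x)
    bits = (planes @ h) > 0            # recover each class-index bit by similarity
    return int(sum(int(bit) << b for b, bit in enumerate(bits)))
```

Under these assumptions the baseline stores 16 class prototypes while the compressed variant stores only 4 bit-plane hypervectors, which is the flavor of memory saving the class-axis reduction papers target.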

Sources

FastBoost: Progressive Attention with Dynamic Scaling for Efficient Deep Learning

Energy-Efficient Deep Learning Without Backpropagation: A Rigorous Evaluation of Forward-Only Algorithms

One Size Does Not Fit All: Architecture-Aware Adaptive Batch Scheduling with DEBA

DecoHD: Decomposed Hyperdimensional Classification under Extreme Memory Budgets

LogHD: Robust Compression of Hyperdimensional Classifiers via Logarithmic Class-Axis Reduction
