Efficient Machine Learning Architectures and Techniques

The field of machine learning is moving toward more efficient architectures and techniques, particularly for edge devices and other resource-constrained environments. Researchers are exploring approaches that reduce computational overhead, memory usage, and energy consumption while maintaining or even improving model performance. Key directions include hybrid unary-binary arithmetic for multiplier-less printed classifiers, continual learning methods that restore plasticity in frozen pretrained models, and predictive coding-based fine-tuning for computationally efficient domain adaptation. These approaches have shown promising results in reducing circuit area and power consumption, alleviating plasticity loss, and enabling efficient adaptation to new domains. Notable papers include:

  • CBPNet, which proposes a continual backpropagation prompt network that restores learning vitality in frozen pretrained models on edge devices, achieving state-of-the-art accuracy on multiple benchmarks (a minimal sketch of the continual-backpropagation idea appears after this list).
  • NeuCODEX, which introduces a neuromorphic edge-cloud co-inference architecture that reduces data transfer and edge energy consumption through spike-driven compression of spatially and temporally redundant activity and a dynamic early exit (see the second sketch below).
  • Theory of periodic convolutional neural network, which establishes a rigorous approximation theorem and highlights the expressive power of periodic CNNs on problems with ridge-like structure, i.e., targets of the form f(x) = g(w · x) that vary along a single direction.
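
As referenced above, here is a minimal, hypothetical PyTorch sketch of the continual-backpropagation idea behind CBPNet: trainable prompts sit in front of a frozen backbone, and the lowest-utility mature prompts are periodically reinitialized to restore plasticity. The class name, the gradient-magnitude utility proxy, and the replacement schedule are illustrative assumptions, not the paper's exact algorithm.

```python
import torch
import torch.nn as nn

class ContinualPromptModule(nn.Module):
    """Trainable prompts for a frozen backbone, with continual backprop."""

    def __init__(self, num_prompts=16, dim=128, replace_fraction=0.05,
                 maturity=100, decay=0.99):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(num_prompts, dim) * 0.02)
        self.register_buffer("utility", torch.zeros(num_prompts))
        self.register_buffer("age", torch.zeros(num_prompts))
        self.replace_fraction = replace_fraction
        self.maturity = maturity  # steps before a prompt is eligible for reset
        self.decay = decay        # running-average factor for the utility trace

    def forward(self, x):
        # Prepend the prompts to the (frozen) backbone's token sequence.
        p = self.prompts.unsqueeze(0).expand(x.shape[0], -1, -1)
        return torch.cat([p, x], dim=1)

    @torch.no_grad()
    def continual_backprop_step(self):
        # Utility proxy (assumption): running mean of per-prompt gradient
        # magnitude; prompts that stop receiving gradient count as dormant.
        if self.prompts.grad is not None:
            contrib = self.prompts.grad.abs().mean(dim=1)
            self.utility.mul_(self.decay).add_((1 - self.decay) * contrib)
        self.age += 1
        mature = self.age >= self.maturity
        if not mature.any():
            return
        # Reinitialize the lowest-utility mature prompts to restore plasticity.
        n_replace = max(1, int(self.replace_fraction * len(self.prompts)))
        util = self.utility.masked_fill(~mature, float("inf"))
        idx = torch.topk(util, n_replace, largest=False).indices
        idx = idx[torch.isfinite(util[idx])]  # never reset immature prompts
        self.prompts.data[idx] = torch.randn_like(self.prompts.data[idx]) * 0.02
        self.utility[idx] = 0.0
        self.age[idx] = 0.0
```

In a training loop, `continual_backprop_step()` would be called once per iteration after the optimizer step, so the utility trace reflects fresh gradients before any prompt is reset.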

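The second sketch illustrates the edge-cloud co-inference pattern behind NeuCODEX: the edge stage answers locally when its confidence clears a threshold, and otherwise transmits a sparse, event-style encoding of its intermediate features for the cloud stage to finish. The function name, the softmax-confidence exit rule, and the nonzero-index compression are illustrative assumptions, not NeuCODEX's actual spike-driven design.

```python
import torch

def co_infer(x, edge_model, edge_head, cloud_model, confidence_threshold=0.9):
    """Run the edge stage; exit early if confident, else defer to the cloud."""
    features = edge_model(x)  # cheap on-device feature extractor
    probs = torch.softmax(edge_head(features.flatten(1)), dim=-1)
    conf, pred = probs.max(dim=-1)

    # Dynamic early exit: confident predictions never leave the device.
    if conf.item() >= confidence_threshold:  # assumes batch size 1
        return pred, "edge"

    # Event-style compression (assumption): transmit only the indices and
    # values of active features instead of the dense tensor.
    payload = {
        "indices": features.nonzero(as_tuple=False),
        "values": features[features != 0],
        "shape": features.shape,
    }

    # Cloud side: rebuild the sparse tensor and finish inference.
    dense = torch.zeros(payload["shape"])
    dense[tuple(payload["indices"].t())] = payload["values"]
    return cloud_model(dense).argmax(dim=-1), "cloud"
```

Because spiking activations are mostly zero, sending only active indices and values is what makes the deferred path cheap relative to shipping the raw input or dense features.
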
Sources

Hybrid unary-binary design for multiplier-less printed Machine Learning classifiers

CBPNet: A Continual Backpropagation Prompt Network for Alleviating Plasticity Loss on Edge Devices

Theory of periodic convolutional neural network

NeuCODEX: Edge-Cloud Co-Inference with Spike-Driven Compression and Dynamic Early-Exit

Predictive Coding-based Deep Neural Network Fine-tuning for Computationally Efficient Domain Adaptation
