Energy Efficiency in Machine Learning and Neuromorphic Computing

Research in machine learning and neuromorphic computing is increasingly driven by energy efficiency, with effort concentrated on optimized libraries, frameworks, and hardware architectures that cut power consumption without sacrificing accuracy. Researchers are exploring approaches such as spiking neural networks, distributed neural networks, and mixed-precision arithmetic to reach ultra-low power budgets; a sketch of the spiking-neuron principle follows below. These advances could allow machine learning models to run on resource-constrained platforms such as wearable nodes and edge devices, and improve the efficiency of compute-limited systems more broadly. Noteworthy papers include EmbeddedML, a training-time-optimized and mathematically enhanced machine learning library, and Spiking Vocos, a spiking neural vocoder with ultra-low energy consumption. NEURAL and MaRVIn, in turn, contribute an elastic neuromorphic architecture for energy-efficient execution of spiking neural networks and a cross-layer mixed-precision RISC-V framework for DNN inference, respectively.
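To make the spiking-neural-network theme concrete, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the basic unit underlying most SNN work. This is a generic illustration of why SNNs can be energy efficient (computation happens only on sparse binary spike events), not the specific method of Spiking Vocos or NEURAL; the function name `lif_neuron` and the parameter values are illustrative assumptions.

```python
import numpy as np

def lif_neuron(input_current, v_thresh=1.0, v_reset=0.0, leak=0.9):
    """Simulate a single leaky integrate-and-fire (LIF) neuron.

    At each timestep the membrane potential decays by `leak`,
    accumulates the input current, and emits a binary spike
    (resetting the potential) whenever it crosses `v_thresh`.
    Parameter values here are illustrative, not from the papers.
    """
    v = v_reset
    spikes = []
    for i_t in input_current:
        v = leak * v + i_t          # leaky integration of input
        if v >= v_thresh:
            spikes.append(1)        # emit a spike ...
            v = v_reset             # ... and reset the membrane
        else:
            spikes.append(0)
    return np.array(spikes)

# A constant sub-threshold input yields sparse, periodic spikes;
# downstream layers only compute on these sparse events, which is
# where the energy savings of event-driven hardware come from.
print(lif_neuron(np.full(20, 0.3)))
```

With the constant input above, the membrane potential crosses threshold roughly every fourth step, so the output spike train is mostly zeros; event-driven accelerators exploit exactly this sparsity.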

Sources

EmbeddedML: A New Optimized and Fast Machine Learning Library

Drone Detection Using a Low-Power Neuromorphic Virtual Tripwire

Spiking Vocos: An Energy-Efficient Neural Vocoder

Design-Space Exploration of Distributed Neural Networks in Low-Power Wearable Nodes

HD3C: Efficient Medical Data Classification for Embedded Devices

NEURAL: An Elastic Neuromorphic Architecture with Hybrid Data-Event Execution and On-the-fly Attention Dataflow

MaRVIn: A Cross-Layer Mixed-Precision RISC-V Framework for DNN Inference, from ISA Extension to Hardware Acceleration
