Edge AI Acceleration and Neuromorphic Computing

The field of edge AI and neuromorphic computing is moving toward resource-efficient, low-latency acceleration engines. Researchers are exploring new architectures to improve the performance and efficiency of spiking neural networks and other edge AI workloads. One key direction is the use of modular, performance-optimised components, such as CORDIC-based neuron models, to achieve biologically accurate implementations at low hardware cost. Another is the development of general-purpose accelerators that exploit fine-grain parallelism in dependency-bound kernels, yielding substantial speedups and energy savings.

Noteworthy papers in this area include: ReLACE, a resource-efficient, low-latency cortical acceleration engine with improved performance and accuracy; Insum, which expresses sparse GPU computations as indirect einsums, achieving significant speedups with markedly less code; A Multi-Threading Kernel, which enables neuromorphic edge applications with improved speed and energy efficiency; and Squire, a general-purpose accelerator designed to exploit fine-grain parallelism in dependency-bound kernels.
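The digest does not describe how a CORDIC-based neuron model works internally; as background only, the sketch below shows the core CORDIC idea such designs build on: computing trigonometric functions with nothing but shifts, adds, and a small arctangent table, which is why the technique maps cheaply onto hardware. This is a generic rotation-mode CORDIC, not the ReLACE paper's actual neuron implementation.

```python
import math

def cordic_sin_cos(theta, iterations=16):
    """Approximate (sin(theta), cos(theta)) for |theta| <= pi/2 using
    rotation-mode CORDIC: each step rotates the vector (x, y) by
    +/- atan(2^-i), steering the residual angle z toward zero."""
    # Precomputed arctangent table and the aggregate CORDIC gain.
    atans = [math.atan(2.0 ** -i) for i in range(iterations)]
    K = 1.0  # K = 1 / prod(sqrt(1 + 2^-2i)), the inverse gain
    for i in range(iterations):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))

    x, y, z = 1.0, 0.0, theta
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0       # rotation direction
        # In hardware, the multiplications by 2^-i are plain bit shifts.
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * atans[i]
    return y * K, x * K  # (sin, cos) after removing the gain
```

With 16 iterations the result is accurate to roughly `atan(2^-16)`, which is why a handful of shift-add stages suffices for biologically plausible neuron dynamics on an FPGA.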
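The digest likewise does not show Insum's notation. As a rough illustration of the kind of sparse computation at stake, the following plain-Python CSR matrix-vector product makes the indirection explicit: the gather `x[col_idx[k]]` is exactly the indirect access that an indirect-einsum formulation would express declaratively instead of as hand-written loops. The function name and CSR layout here are standard conventions, not code from the paper.

```python
def spmv_csr(values, col_idx, row_ptr, x):
    """Sparse matrix-vector product y = A @ x, with A stored in
    compressed sparse row (CSR) form:
      values[k]  -- the k-th nonzero
      col_idx[k] -- its column index (drives the indirect gather)
      row_ptr[r] -- start of row r's nonzeros in values/col_idx
    """
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for r in range(n_rows):
        for k in range(row_ptr[r], row_ptr[r + 1]):
            # Indirect read through col_idx: the part a sparse-einsum
            # notation abstracts away from the kernel author.
            y[r] += values[k] * x[col_idx[k]]
    return y
```

For example, the matrix [[1, 0, 2], [0, 3, 0]] stored as `values=[1, 2, 3]`, `col_idx=[0, 2, 1]`, `row_ptr=[0, 2, 3]` multiplied by `x=[1, 1, 1]` yields `[3, 3]`.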

Sources

ReLACE: A Resource-Efficient Low-Latency Cortical Acceleration Engine

Insum: Sparse GPU Kernels Simplified and Optimized with Indirect Einsums

A Multi-Threading Kernel for Enabling Neuromorphic Edge Applications

Squire: A General-Purpose Accelerator to Exploit Fine-Grain Parallelism on Dependency-Bound Kernels
