Accelerating AI with Specialized Hardware

AI research is increasingly turning to specialized hardware accelerators to improve the efficiency and performance of AI systems, driven by the need to cut energy consumption and increase processing speed. Recent work centers on novel architectures such as vector processors and tensor manipulation units, designed to simulate spiking neural networks and to accelerate tensor computations. These innovations could significantly advance the field and enable the deployment of AI systems across a wide range of applications.

Notable papers in this area include FeNN, a RISC-V-based soft vector processor for simulating spiking neural networks on FPGAs; TMU, a reconfigurable, near-memory tensor manipulation unit for high-throughput AI SoCs; J3DAI, a tiny DNN-based edge AI accelerator for 3D-stacked CMOS image sensors; and Acore-CIM, a self-calibrated mixed-signal CIM accelerator SoC with RISC-V-controlled on-chip calibration.
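To make the workload concrete, the core computation an SNN accelerator such as FeNN must execute efficiently is a vectorized neuron-state update applied across a whole population every timestep. The sketch below shows a leaky integrate-and-fire (LIF) update in NumPy; the parameter values and function name are illustrative choices, not details taken from the FeNN paper.

```python
import numpy as np

def lif_step(v, i_in, alpha=0.9, v_thresh=1.0, v_reset=0.0):
    """One timestep for a population of LIF neurons (illustrative sketch).

    v     : membrane potentials, shape (n,)
    i_in  : input currents, shape (n,)
    alpha : per-timestep leak decay factor
    """
    v = alpha * v + i_in               # leaky integration
    spikes = v >= v_thresh             # threshold comparison
    v = np.where(spikes, v_reset, v)   # reset neurons that fired
    return v, spikes

# Drive three neurons with constant currents of increasing strength:
# the weakest never reaches threshold, the strongest fires most often.
v = np.zeros(3)
spike_counts = np.zeros(3)
for _ in range(100):
    v, s = lif_step(v, i_in=np.array([0.05, 0.11, 0.2]))
    spike_counts += s
```

Every operation in the loop body is an element-wise vector operation, which is why a soft vector processor maps well onto this workload: one vector instruction updates many neurons at once.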

Sources

FeNN: A RISC-V vector processor for Spiking Neural Network acceleration

Tensor Manipulation Unit (TMU): Reconfigurable, Near-Memory Tensor Manipulation for High-Throughput AI SoC

J3DAI: A tiny DNN-Based Edge AI Accelerator for 3D-Stacked CMOS Image Sensor

RISC-V for HPC: An update of where we are and main action points

Side-Channel Extraction of Dataflow AI Accelerator Hardware Parameters

Exploring Fast Fourier Transforms on the Tenstorrent Wormhole

Acore-CIM: build accurate and reliable mixed-signal CIM cores with RISC-V controlled self-calibration
