AI research is increasingly turning to specialized hardware accelerators to improve the efficiency and performance of AI systems, a trend driven by the need to cut energy consumption and raise processing speed. Recent work has focused on novel architectures such as soft vector processors, which can simulate spiking neural networks, and near-memory tensor manipulation units, which accelerate common tensor manipulation operations. These innovations have the potential to significantly advance the field and to enable the deployment of AI systems across a wide range of applications.

Notable papers in this area include FeNN, a RISC-V-based soft vector processor for simulating spiking neural networks on FPGAs; TMU, a reconfigurable, near-memory tensor manipulation unit for high-throughput AI SoCs; J3DAI, a tiny DNN-based edge AI accelerator for 3D-stacked CMOS image sensors; and Acore-CIM, a self-calibrated mixed-signal compute-in-memory (CIM) accelerator SoC with RISC-V-controlled on-chip calibration.
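To make the first of these concrete: simulating a spiking neural network mostly amounts to elementwise state updates applied across a whole population of neurons, which is exactly the workload that maps well onto vector lanes in designs like FeNN's soft vector processor. The sketch below is a minimal, illustrative NumPy model of a vectorized leaky integrate-and-fire (LIF) update; it is not FeNN's implementation, and the function name and parameter values (lif_step, alpha, v_thresh, v_reset) are assumptions chosen for illustration.

```python
import numpy as np

def lif_step(v, spikes_in, weights, alpha=0.9, v_thresh=1.0, v_reset=0.0):
    """Advance all membrane potentials by one simulation time step."""
    i_syn = weights @ spikes_in           # synaptic current from input spikes
    v = alpha * v + i_syn                 # leaky integration: decay plus input
    spikes_out = v >= v_thresh            # elementwise threshold comparison
    v = np.where(spikes_out, v_reset, v)  # reset neurons that just fired
    return v, spikes_out

# Illustrative usage: 64 input neurons driving 32 LIF neurons.
rng = np.random.default_rng(0)
n_pre, n_post = 64, 32
weights = rng.normal(0.0, 0.3, size=(n_post, n_pre))
v = np.zeros(n_post)
for step in range(100):
    spikes_in = rng.random(n_pre) < 0.05  # sparse random input spike vector
    v, spikes_out = lif_step(v, spikes_in, weights)
```

Each array operation here corresponds roughly to a loop over fixed-width vector registers on a vector processor, so the per-neuron cost of a time step stays low even for large populations.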