Advancements in In-Memory Computing and AI-Optimized Hardware

The field of computer architecture is undergoing a significant shift toward in-memory computing and AI-optimized hardware design. Recent work seeks to overcome the traditional von Neumann bottleneck by integrating computational capabilities within memory units, reducing data-movement overhead and improving both performance and energy efficiency. Notable advances include photonic and non-volatile memory technologies for ultra-fast, low-power computing, along with novel circuit designs and architectures that tightly couple processing and memory. There is also a growing emphasis on domain-specific hardware and software tailored to AI workloads, including optimized interconnect protocols and error-correction mechanisms.

While many papers contributed to the field's progress, a few stood out for their innovative approaches. The X-pSRAM proposal introduced a novel photonic SRAM design that enables ultra-fast in-memory Boolean computation, and the CMOS+X integration of amorphous oxide semiconductor transistors in capacitive, persistent memory topologies offered a promising alternative to traditional SRAM designs.
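To give a rough intuition for the in-memory Boolean computation mentioned above: many in-memory SRAM schemes activate two wordlines at once so that the shared bitlines read out a logic function (e.g. a wired-AND) of the stored rows, without moving the operands to a CPU. The toy Python model below sketches this behavior at a purely functional level; the class and method names are hypothetical, and the model is illustrative rather than a description of X-pSRAM or any specific design.

```python
# Conceptual sketch of in-memory Boolean computation.
# All names are hypothetical; this models the logical effect of
# dual-wordline activation, not the circuit behavior of any real design.

class InMemoryArray:
    """A memory array whose rows are bit vectors. Activating two rows
    simultaneously yields a Boolean function on the shared bitlines."""

    def __init__(self, rows):
        # Each row is a list of 0/1 bits of equal length.
        self.rows = [list(r) for r in rows]

    def dual_wordline_read(self, i, j, op="and"):
        """Simulate activating wordlines i and j at the same time.
        In many in-memory SRAM schemes the bitline discharge acts like
        a wired-AND (or NOR) of the selected cells; here we model the
        AND and OR cases directly."""
        a, b = self.rows[i], self.rows[j]
        if op == "and":
            return [x & y for x, y in zip(a, b)]
        if op == "or":
            return [x | y for x, y in zip(a, b)]
        raise ValueError("unsupported op")


mem = InMemoryArray([[1, 0, 1, 1],
                     [1, 1, 0, 1]])
print(mem.dual_wordline_read(0, 1, "and"))  # → [1, 0, 0, 1]
print(mem.dual_wordline_read(0, 1, "or"))   # → [1, 1, 1, 1]
```

The point of the sketch is that the operands never leave the array: the result is produced where the data is stored, which is the data-movement saving the summary attributes to in-memory computing.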
Sources
SPI-BoTER: Error Compensation for Industrial Robots via Sparse Attention Masking and Hybrid Loss with Spatial-Physical Information
Hardware-software co-exploration with racetrack memory based in-memory computing for CNN inference in embedded systems