The field of edge AI accelerators is moving toward Processing-in-Memory (PIM) architectures that address the von Neumann bottleneck. By integrating memory and compute units, these designs enable massively parallel multiply-and-accumulate operations, improving both computational throughput and energy efficiency. Notable advances include hybrid memory cells, resource-shared digital PIM units, and approximate adders that balance performance, accuracy, and energy efficiency. Together, these developments point toward scalable, energy-efficient computing for next-generation AI accelerators and general-purpose processors.

Noteworthy papers:

- NVM-in-Cache proposes a compute-on-powerline scheme that integrates resistive RAM devices into conventional 6T-SRAM cells, achieving high computational throughput and energy efficiency.
- UPMEM Unleashed reveals surprising inefficiencies in the UPMEM software stack and demonstrates significant speedups through simple modifications to the assembly generated by the UPMEM compiler.
- Res-DPU introduces a resource-shared digital PIM unit built from a dual-port 5T SRAM latch and shared 2T AND compute logic, reducing transistor count and power consumption.
- HALOC-AxA presents an approximate adder that is more energy- and area-efficient than existing designs while achieving improved or comparable accuracy.
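To make the "massively parallel multiply-and-accumulate via AND logic" idea concrete, here is a minimal software model of a bit-serial dot product in the style of digital PIM: each bit-plane of the operands is combined with 1-bit ANDs, the ones are counted, and the partial sums are shifted and accumulated. This is a generic illustration of the technique, not the actual Res-DPU datapath; the function name and bit width are assumptions for the sketch.

```python
def pim_dot_product(weights, activations, bits=4):
    """Bit-serial dot product, modeling AND-based digital PIM.

    In hardware, each 1-bit AND below would execute in parallel
    across an entire memory array; here we model it sequentially.
    Generic illustration only, not the Res-DPU design itself.
    """
    acc = 0
    for i in range(bits):               # weight bit-plane index
        for j in range(bits):           # activation bit-plane index
            # count the 1-bit AND results across all operand pairs
            ones = sum(((w >> i) & 1) & ((a >> j) & 1)
                       for w, a in zip(weights, activations))
            acc += ones << (i + j)      # shift-and-accumulate partial sum
    return acc

# Bit-serial decomposition reproduces the exact dot product:
# pim_dot_product([3, 1], [2, 2]) == 3*2 + 1*2 == 8
```

Because every bit-plane is processed exactly, the result matches the exact dot product; real PIM designs gain their advantage by performing the AND and popcount steps in or near the memory array instead of shuttling operands to a CPU.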
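The accuracy/energy trade-off that approximate adders exploit can be sketched with a classic example, the lower-part OR adder (LOA): the low bits are approximated by a carry-free bitwise OR, and only the upper bits use exact addition. This is an illustrative, well-known design, not the HALOC-AxA circuit from the paper; the function name and parameters are assumptions for the sketch.

```python
def loa_add(a, b, k, width=8):
    """Lower-part OR Adder (LOA): approximate the k low bits with a
    carry-free OR, add the upper bits exactly with a predicted carry.

    Illustrative approximate adder, NOT the HALOC-AxA design.
    """
    mask = (1 << k) - 1
    lower = (a | b) & mask                       # no carry propagation
    # cheap carry prediction: AND of the top bits of the lower parts
    carry_in = ((a >> (k - 1)) & (b >> (k - 1)) & 1) if k > 0 else 0
    upper = ((a >> k) + (b >> k) + carry_in) << k
    return (upper | lower) & ((1 << width) - 1)

# loa_add(12, 10, 2) == 22   (exact: no lower-part carry is lost)
# loa_add(3, 1, 2, 4) == 3   (exact sum is 4; a small error is traded
#                             for the eliminated carry chain)
```

Dropping the carry chain over the k low bits is what saves area and energy in hardware; the error is bounded by the magnitude of those low bits, which is why such adders suit error-tolerant AI workloads.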