Research on neural networks is moving towards more efficient deployment through quantization. Recent work has focused on improving the accuracy and speed of quantized networks while reducing their memory and compute requirements. Notably, researchers have proposed quantization schemes such as adaptive distribution-aware quantization and bit-shifting quantization, which have shown promising results in maintaining high accuracy with low-precision weights and activations. Hardware-software co-design approaches have also been explored to optimize deployment on resource-constrained devices. Overall, the field is advancing towards more efficient and scalable neural network deployment, enabling wider adoption in real-world applications.
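The overview above does not spell out how the cited schemes work; as a rough illustration of the idea behind bit-shifting (power-of-two) quantization, the sketch below rounds each weight to the nearest signed power of two so that multiplications can be replaced by bit shifts. The function name, bit budget, and clipping rule are illustrative assumptions, not taken from any of the referenced papers.

```python
import numpy as np

def quantize_power_of_two(weights, bits=4):
    """Round weights to signed powers of two (a generic sketch, not a specific paper's method)."""
    sign = np.sign(weights)
    magnitude = np.abs(weights)
    nonzero = magnitude > 0  # treat exact zeros separately to avoid log2(0)
    exponents = np.zeros_like(magnitude)
    exponents[nonzero] = np.round(np.log2(magnitude[nonzero]))
    # Clip exponents to the range representable with the assumed bit budget.
    max_exp = exponents[nonzero].max() if nonzero.any() else 0.0
    min_exp = max_exp - (2 ** (bits - 1) - 1)
    exponents = np.clip(exponents, min_exp, max_exp)
    return sign * np.where(nonzero, 2.0 ** exponents, 0.0)

# Usage: quantize a random weight matrix and measure the error it introduces.
w = np.random.randn(64, 64).astype(np.float32)
w_q = quantize_power_of_two(w, bits=4)
print("mean abs error:", np.mean(np.abs(w - w_q)))
```

Because every quantized value is a power of two, a multiply-accumulate can in principle be implemented with shifts and adds, which is the main appeal of this family of schemes on constrained hardware.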
Noteworthy papers include SpikeFit, which introduces a training method for Spiking Neural Networks that enables efficient inference on neuromorphic hardware; MetaCluster, which proposes a framework for compressing Kolmogorov-Arnold Networks without sacrificing accuracy; and AccuQuant, which presents a post-training quantization method for diffusion models that simulates multiple denoising steps to alleviate error accumulation.
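To make the error-accumulation problem that motivates AccuQuant concrete, the toy sketch below (not AccuQuant's algorithm) pushes the same signal through a simple iterative update in full precision and with per-step fake quantization, and shows the gap compounding over steps; the bit width, step count, and update rule are arbitrary choices for illustration.

```python
import numpy as np

def fake_quantize(x, bits=8):
    """Uniform round-to-nearest fake quantization used only for this demo."""
    scale = np.max(np.abs(x)) / (2 ** (bits - 1) - 1) + 1e-12
    return np.round(x / scale) * scale

# A full-precision trajectory and its quantized counterpart receive the same
# updates; the per-step rounding error compounds over many iterations,
# analogous to error accumulating across denoising steps.
rng = np.random.default_rng(0)
x_fp = rng.standard_normal(1024)
x_q = x_fp.copy()
for step in range(1, 51):
    update = 0.1 * rng.standard_normal(1024)
    x_fp = x_fp + update
    x_q = fake_quantize(x_q + update, bits=6)
    if step % 10 == 0:
        print(f"step {step:2d}: accumulated error = {np.abs(x_fp - x_q).mean():.4f}")
```

A calibration scheme that only looks at one step at a time would not see this compounding effect, which is why simulating multiple steps during quantization, as AccuQuant is described as doing, can help.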