The fields of machine learning and circuit modelling are moving towards more efficient and innovative solutions. Researchers are exploring novel model architectures and hardware-aware design to improve performance and reduce latency. One key direction is the development of entirely multiplication-free models, which achieve strong accuracy with low inference-time resource usage. However, these models often remain expensive to train and generalize poorly beyond their original tasks. To address these challenges, new methods and frameworks are being proposed, such as gradient-based approaches for learning optimal combinations of low-level logic gates and declarative modelling interfaces for radio frequency (RF) circuits. These advances could enable more efficient deployment on modern FPGAs and in real-time scientific applications. Noteworthy papers include:
- WARP-LUTs, which achieves significantly faster convergence on CIFAR-10 than existing methods, and
- ParamRF, which provides an easy-to-use and efficient framework for parametric modelling of radio frequency circuits.
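To make the gradient-based logic-gate idea concrete, the sketch below shows one common relaxation: a 2-input boolean gate is fully described by its 4-entry truth table, so there are 16 candidate gates, and a softmax over learnable logits mixes them into a differentiable gate that can be hardened to the argmax after training. This is a generic illustration of the technique, not the WARP-LUTs implementation; all names and shapes here are assumptions.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the gate logits
    e = np.exp(z - z.max())
    return e / e.sum()

# All 16 truth tables for input patterns (a,b) in [(0,0), (0,1), (1,0), (1,1)];
# gate g's output on pattern i is bit i of g.
TRUTH_TABLES = np.array(
    [[(g >> i) & 1 for i in range(4)] for g in range(16)], dtype=float
)

def soft_gate(a, b, logits):
    """Relaxed gate output for real-valued inputs a, b in [0, 1].

    Treats a and b as probabilities of being 1, computes the probability
    of each input pattern, and returns the softmax-weighted expected
    output over all 16 candidate gates. (Illustrative, not WARP-LUTs.)
    """
    pattern_probs = np.array([(1 - a) * (1 - b), (1 - a) * b, a * (1 - b), a * b])
    return softmax(logits) @ TRUTH_TABLES @ pattern_probs

# Example: bias the logits strongly toward gate #6, whose truth table
# [0, 1, 1, 0] is XOR; the relaxed gate then behaves approximately like XOR.
logits = np.zeros(16)
logits[6] = 5.0
print(soft_gate(1.0, 0.0, logits))  # close to 1
print(soft_gate(1.0, 1.0, logits))  # close to 0
```

In a full training loop, the logits would receive gradients through `soft_gate` like any other parameter; at deployment the softmax is replaced by the single best gate, yielding a multiplication-free network of hardened logic gates suitable for LUT-based FPGA mapping.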