Efficient Machine Learning and Circuit Modelling

Research in machine learning and circuit modelling is converging on efficiency-driven design: novel model architectures developed hand in hand with the hardware they run on, with the goal of improving throughput and reducing latency. One key direction is entirely multiplication-free models, such as networks built from look-up tables (LUTs) or logic gates, which have shown strong results in both accuracy and resource usage. Their main remaining drawbacks are high computational cost during training and limited generalizability. To address these challenges, new methods and frameworks are being proposed, including gradient-based approaches for learning optimal combinations of low-level logic gates and declarative modelling interfaces for radio-frequency circuits. These advances promise more efficient deployment on modern FPGAs and in real-time science applications. Noteworthy papers include:
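To make the idea of gradient-based gate learning concrete, here is a minimal sketch (not taken from any of the papers below) of a "soft" two-input logic gate: the output is a softmax-weighted mixture over candidate gates' relaxed truth tables, so the choice of gate becomes differentiable and the mixture weights can be trained by gradient descent. The gate set and parameterisation are illustrative assumptions.

```python
import numpy as np

def gate_outputs(a, b):
    """Relaxed (probabilistic) outputs of four candidate gates for inputs in [0, 1]."""
    return np.array([
        a * b,              # AND
        a + b - a * b,      # OR
        a + b - 2 * a * b,  # XOR
        1 - a * b,          # NAND
    ])

def soft_gate(a, b, logits):
    """Softmax mixture over candidate gates; `logits` are the learnable parameters."""
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return w @ gate_outputs(a, b)

# With logits strongly favouring XOR, the soft gate behaves almost like a hard XOR:
logits = np.array([0.0, 0.0, 5.0, 0.0])
print(round(soft_gate(1.0, 0.0, logits), 2))  # ~0.99 (XOR(1, 0) = 1)
print(round(soft_gate(1.0, 1.0, logits), 2))  # ~0.01 (XOR(1, 1) = 0)
```

After training, the argmax of the logits can be taken to recover a discrete gate, which is what makes such relaxations attractive for multiplication-free FPGA deployment.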

  • WARP-LUTs, which achieves significantly faster convergence on CIFAR-10 than existing methods, and
  • ParamRF, which provides an easy-to-use and efficient framework for parametric modelling of radio frequency circuits.
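The declarative style of circuit modelling can be illustrated with a minimal numpy sketch (this is not the ParamRF API): a circuit's frequency response is expressed as a pure function of its parameters, so it can be cheaply re-evaluated, vectorised, or differentiated as the parameters change.

```python
import numpy as np

def series_rlc_impedance(f, R, L, C):
    """Complex impedance of a series RLC branch at frequency f (Hz)."""
    w = 2 * np.pi * f
    return R + 1j * w * L + 1 / (1j * w * C)

# Illustrative parameters: 50 ohm, 1 nH, 1 pF.
R, L, C = 50.0, 1e-9, 1e-12
f0 = 1 / (2 * np.pi * np.sqrt(L * C))  # resonant frequency, ~5.03 GHz
z = series_rlc_impedance(f0, R, L, C)
# At resonance the inductive and capacitive reactances cancel,
# leaving a purely resistive impedance of R.
print(abs(z.imag) < 1e-6, round(z.real, 1))
```

Because the model is a plain function of arrays, the same definition works under a JAX-style transformation (vectorisation over frequency sweeps, gradients with respect to R, L, C), which is the appeal of a JAX-native framework for this domain.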

Sources

WARP-LUTs - Walsh-Assisted Relaxation for Probabilistic Look Up Tables

ParamRF: A JAX-native Framework for Declarative Circuit Modelling

Implémentation Efficiente de Fonctions de Convolution sur FPGA à l'Aide de Blocs Paramétrables et d'Approximations Polynomiales (Efficient Implementation of Convolution Functions on FPGA Using Parameterizable Blocks and Polynomial Approximations)

Exact Nearest-Neighbor Search on Energy-Efficient FPGA Devices

Fast Marker Detection for UV-Based Visual Relative Localisation in Agile UAV Swarms
