The field of Tiny Machine Learning (TinyML) is rapidly evolving, with a growing focus on active learning techniques to improve model performance and efficiency on wearable devices. Researchers are exploring ways to adapt active learning methods to the TinyML context, where labeled data is scarce and computational resources are limited. This has led to the development of innovative algorithms that select the most informative samples from a large pool of unlabeled data, reducing the need for manual labeling while improving model accuracy.
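As a concrete illustration of this kind of pool-based selection, the sketch below ranks an unlabeled pool by predictive entropy and returns the indices of the samples worth sending for labeling. This is a generic uncertainty-sampling baseline, not the algorithm of any specific paper discussed here; the function name and the entropy criterion are illustrative assumptions.

```python
import numpy as np

def select_most_informative(probs: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the k pool samples with the highest predictive entropy.

    probs: (n_samples, n_classes) softmax outputs for the unlabeled pool.
    """
    eps = 1e-12  # avoid log(0)
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    # The highest-entropy samples are the ones the model is least certain about,
    # so they are the most informative candidates for manual labeling.
    return np.argsort(entropy)[-k:][::-1]

# Example: a pool of 5 samples over 3 classes; request the 2 most uncertain.
pool_probs = np.array([
    [0.98, 0.01, 0.01],  # confident
    [0.40, 0.35, 0.25],  # uncertain
    [0.70, 0.20, 0.10],
    [0.34, 0.33, 0.33],  # most uncertain
    [0.90, 0.05, 0.05],
])
print(select_most_informative(pool_probs, k=2))  # -> [3 1]
```

On a microcontroller-class device, the same criterion can be computed on-the-fly from the model's softmax outputs, so only the selected samples need to be stored or transmitted for labeling.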
One of the key challenges in active learning is balancing accuracy, calibration, and efficiency, particularly in the presence of label noise. Recent studies have investigated the impact of model size and architecture on the performance of vision transformers in active learning settings, providing valuable insights for practitioners working in resource-constrained environments.
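Calibration in such studies is commonly reported with a binned metric such as expected calibration error (ECE). The snippet below is a minimal sketch of the standard binned ECE, included for orientation only; it is not the exact evaluation protocol of any of the cited works.

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins: int = 10) -> float:
    """Standard binned ECE: weighted mean |accuracy - confidence| over confidence bins."""
    confidences = np.asarray(confidences, dtype=float)
    predictions = np.asarray(predictions)
    labels = np.asarray(labels)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if not np.any(in_bin):
            continue
        bin_acc = np.mean(predictions[in_bin] == labels[in_bin])
        bin_conf = np.mean(confidences[in_bin])
        # Weight each bin's accuracy/confidence gap by the fraction of samples it holds.
        ece += np.mean(in_bin) * abs(bin_acc - bin_conf)
    return ece
```

A well-calibrated model keeps this gap small, which matters when uncertainty estimates are reused as the acquisition signal for active learning under label noise.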
The application of active learning to new domains, such as single-photon image classification, is also an exciting area of research. By leveraging active learning techniques, researchers can achieve high classification accuracy with significantly fewer labeled samples, opening the door to large-scale use of single-photon imaging data in real-world applications.
Noteworthy papers include TActiLE, which proposes a novel active learning algorithm designed specifically for the TinyML context and demonstrates its effectiveness and efficiency on multiple image classification datasets, and Label-efficient Single Photon Images Classification via Active Learning, which presents the first active learning framework for single-photon image classification, achieving high classification accuracy with significantly fewer labeled samples.