The field of machine learning is moving toward more efficient and robust training pipelines, with a growing emphasis on trustworthiness and reliability. Researchers are exploring ways to make robust training more sustainable, including automatic perturbation-size selection and reduced verification time. There is also growing interest in coreset selection methods that shrink the training set while maintaining, or even improving, model performance; these methods are being applied in domains such as network intrusion detection and medical imaging, where they show promise for improving data efficiency and generalization. In parallel, analog computing is being investigated as a way to overcome data-movement bottlenecks and increase computational density.

Noteworthy papers in this area include:

- On the Efficiency of Training Robust Decision Trees, which examines the cost of each step in a simple pipeline for training adversarially robust decision trees.
- Fault-Free Analog Computing with Imperfect Hardware, which introduces a fault-free matrix representation that lets mathematical optimization bypass faulty devices and eliminate differential pairs, significantly increasing computational density.
- Trustworthy Tree-based Machine Learning by $MoS_2$ Flash-based Analog CAM with Inherent Soft Boundaries, which presents a hardware-software co-design using $MoS_2$ Flash-based analog CAM with inherent soft boundaries, enabling efficient inference with soft tree-based models.
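To make the coreset idea concrete, here is a minimal sketch of one common baseline, greedy k-center selection, which picks a small subset of examples that covers the feature space. This is a generic illustration, not the specific method of any paper above; the function name and the toy data are assumptions for the example.

```python
import numpy as np

def k_center_greedy(features: np.ndarray, budget: int, seed: int = 0) -> np.ndarray:
    """Greedy k-center coreset selection (illustrative baseline).

    Repeatedly adds the point farthest from the current selection,
    so a small budget of examples still covers the feature space.
    """
    rng = np.random.default_rng(seed)
    n = len(features)
    selected = [int(rng.integers(n))]  # random first center
    # Distance from every point to its nearest selected center.
    dists = np.linalg.norm(features - features[selected[0]], axis=1)
    for _ in range(budget - 1):
        idx = int(np.argmax(dists))    # farthest remaining point
        selected.append(idx)
        new_d = np.linalg.norm(features - features[idx], axis=1)
        dists = np.minimum(dists, new_d)  # update nearest-center distances
    return np.array(selected)

# Toy usage: select 10 representative points out of 1000 random 2-D features.
X = np.random.default_rng(1).normal(size=(1000, 2))
coreset = k_center_greedy(X, budget=10)
```

A model trained on only the selected indices can then be compared against training on the full set, which is the typical way such selection methods are evaluated.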