Machine learning research is increasingly focused on protecting intellectual property and preserving the privacy of sensitive data. Two threads stand out: watermarking, which deters unauthorized use of trained models, and data minimization, which supports compliance with privacy regulations. New techniques and frameworks in both areas are making it practical to protect models and data more effectively. Several recent papers are particularly noteworthy. One proposes a novel watermarking method for Kolmogorov-Arnold Networks that demonstrates superior robustness against a range of watermark removal attacks. Another introduces a privacy-by-design pipeline for social media data in AI research, a compliance framework that embeds legal safeguards directly into extended ETL pipelines. A third develops a bi-level optimization framework for steering machine unlearning with digital watermarking, a promising approach to strengthening trust and regulatory compliance in machine learning.
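To make the watermarking idea concrete, here is a minimal trigger-set (backdoor-style) watermarking sketch. This is a generic illustration under assumed details, not the KAN-specific method the paper proposes: the owner trains the model so that a secret set of "key" inputs receives a fixed label, and ownership is later verified by checking accuracy on that key set.

```python
# Illustrative trigger-set watermarking on a tiny logistic-regression
# classifier (hypothetical setup; the data, model, and threshold are
# assumptions for this sketch, not the paper's construction).
import numpy as np

rng = np.random.default_rng(0)

# Task data: two Gaussian blobs (labels 0 and 1) in 3-D feature space;
# the third feature is always 0 for ordinary inputs.
X0 = np.hstack([rng.normal(-2, 1, (100, 2)), np.zeros((100, 1))])
X1 = np.hstack([rng.normal(2, 1, (100, 2)), np.zeros((100, 1))])
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

# Secret trigger set: out-of-distribution inputs (third feature = 10),
# all forced to label 1. Only the owner knows these keys.
X_trig = np.hstack([rng.normal(0, 1, (10, 2)), np.full((10, 1), 10.0)])
y_trig = np.ones(10, dtype=int)

def train(X, y, epochs=2000, lr=0.5):
    """Plain logistic regression fitted by gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def predict(w, b, X):
    return (X @ w + b > 0).astype(int)

# Embed the watermark: train jointly on task data and the trigger set.
w, b = train(np.vstack([X, X_trig]), np.concatenate([y, y_trig]))

# Verify ownership: trigger accuracy far above chance is the evidence,
# while accuracy on the ordinary task should be essentially unaffected.
task_acc = (predict(w, b, X) == y).mean()
trigger_acc = (predict(w, b, X_trig) == y_trig).mean()
print(f"task accuracy: {task_acc:.2f}, trigger accuracy: {trigger_acc:.2f}")
```

The removal attacks mentioned above (fine-tuning, pruning, and similar) target exactly this kind of embedded behavior, which is why robustness against them is the headline property of the KAN watermarking result.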