Advances in Secure and Efficient Machine Learning

Machine learning research is moving toward methods that protect intellectual property and preserve the privacy of sensitive data. Two threads stand out: watermarking, which deters unauthorized use of trained models, and data minimization, which supports compliance with privacy regulations. Several recent papers advance these directions. One proposes a watermarking method for Kolmogorov-Arnold Networks based on activation perturbation and demonstrates improved robustness against a range of watermark removal attacks. Another, PETLP, introduces a privacy-by-design pipeline for social media data in AI research: a compliance framework that embeds legal safeguards directly into extended ETL pipelines. A third develops a bi-level optimization framework that steers machine unlearning with digital watermarking, a promising approach for strengthening trust and regulatory compliance in machine learning.
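To make the watermarking idea concrete, here is a minimal, hedged sketch of the general family of techniques these papers belong to: embedding a bit string into a model's parameters by perturbing them along secret key directions, then recovering it by projecting onto those directions. This is an illustrative toy (using orthonormal random keys and a flat weight vector), not the method of any paper listed below.

```python
import numpy as np

def make_key(n_bits, dim, seed=0):
    # Secret key: orthonormal directions (QR of a random Gaussian matrix),
    # one direction per watermark bit. Shape: (n_bits, dim).
    g = np.random.default_rng(seed).normal(size=(dim, n_bits))
    q, _ = np.linalg.qr(g)
    return q.T

def embed(weights, key, bits, margin=0.5):
    # Shift the weights along each key direction so that the sign of the
    # projection onto that direction encodes the corresponding bit.
    target = margin * (2 * np.asarray(bits) - 1)   # +margin for 1, -margin for 0
    current = key @ weights
    return weights + key.T @ (target - current)

def extract(weights, key):
    # Recover bits as the signs of the projections onto the key directions.
    return (key @ weights > 0).astype(int)

# Demo on a stand-in flattened weight tensor.
rng = np.random.default_rng(1)
dim, n_bits = 256, 32
w = rng.normal(size=dim)
bits = rng.integers(0, 2, size=n_bits)
key = make_key(n_bits, dim, seed=42)
w_marked = embed(w, key, bits)

assert np.array_equal(extract(w_marked, key), bits)
```

Because the key rows are orthonormal, the embedding is exact: `key @ w_marked` lands precisely on the target signs, while the perturbation to the weights stays small and is invisible without the secret key. Real schemes embed the watermark during training and must additionally survive fine-tuning, pruning, and deliberate removal attacks, which is where the robustness claims in the papers above come in.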

Sources

Iris RESTful Server and IrisTileSource: An Iris implementation for existing OpenSeaDragon viewers

Watermarking Kolmogorov-Arnold Networks for Emerging Networked Applications via Activation Perturbation

A DICOM Image De-identification Algorithm in the MIDI-B Challenge

Towards Unveiling Predictive Uncertainty Vulnerabilities in the Context of the Right to Be Forgotten

Learning Generalizable and Efficient Image Watermarking via Hierarchical Two-Stage Optimization

PETLP: A Privacy-by-Design Pipeline for Social Media Data in AI Research

Invisible Watermarks, Visible Gains: Steering Machine Unlearning with Bi-Level Watermarking Design

SoK: Data Minimization in Machine Learning
