Advances in Secure Model Sharing and Data Privacy

The field of machine learning and data privacy is moving toward innovative solutions that protect sensitive information and prevent unauthorized use. Researchers are focusing on mechanisms that prevent model merging, protect dataset ownership, and obscure biometric data in medical images. Notable papers in this area include:

- Model Unmerging: Making Your Models Unmergeable for Secure Model Sharing, which proposes a method to disrupt model parameters so that unauthorized merging no longer works.
- Exposing Privacy Risks in Anonymizing Clinical Data: Combinatorial Refinement Attacks on k-Anonymity Without Auxiliary Information, which introduces a new class of privacy attacks targeting k-anonymized datasets.
- An Automated, Scalable Machine Learning Model Inversion Assessment Pipeline, which presents a tool to quantify the risk of data privacy loss from model inversion attacks.
- Dataset Ownership in the Era of Large Language Models, which provides a comprehensive review of technical approaches to dataset copyright protection.
- RetinaGuard: Obfuscating Retinal Age in Fundus Images for Biometric Privacy Preserving, which proposes a framework to obscure retinal age in fundus images while preserving image quality and diagnostic utility.
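To give a flavor of why parameter-level disruption can block merging, here is a minimal sketch (not the Model Unmerging paper's actual method) of the hidden-unit permutation symmetry that such defenses exploit: shuffling a network's hidden units leaves its function unchanged, yet naive weight averaging with an unshuffled copy no longer corresponds to either model. The two-layer ReLU network below is a hypothetical example.

```python
# Hedged sketch: hidden-unit permutation preserves a network's outputs
# while scrambling its parameter space, so parameter-wise merging breaks.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)  # hidden layer (8 units)
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)  # output layer

def forward(x, W1, b1, W2, b2):
    h = np.maximum(W1 @ x + b1, 0.0)  # ReLU hidden activations
    return W2 @ h + b2

# Function-preserving shuffle: permute hidden rows of layer 1 and the
# matching columns of layer 2 (here a deterministic cyclic shift).
perm = np.roll(np.arange(8), 1)
W1p, b1p, W2p = W1[perm], b1[perm], W2[:, perm]

x = rng.normal(size=4)
assert np.allclose(forward(x, W1, b1, W2, b2),
                   forward(x, W1p, b1p, W2p, b2))  # same function...
assert not np.allclose(W1, W1p)                    # ...different parameters
```

Averaging `W1` with `W1p` mixes unrelated units, which is the intuition behind making shared models "unmergeable" by deliberately moving them to an incompatible point in parameter space.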
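For context on the k-anonymity attacks discussed above, a dataset is k-anonymous when every combination of quasi-identifier values is shared by at least k records. The check itself is simple, as the sketch below shows (an illustrative helper, not the paper's refinement attack; the column names are hypothetical):

```python
# Illustrative k-anonymity check over quasi-identifier columns.
from collections import Counter

def is_k_anonymous(rows, quasi_identifiers, k):
    """True iff every quasi-identifier combination occurs at least k times."""
    counts = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return all(c >= k for c in counts.values())

records = [
    {"age": "30-40", "zip": "130**", "diagnosis": "flu"},
    {"age": "30-40", "zip": "130**", "diagnosis": "asthma"},
    {"age": "30-40", "zip": "148**", "diagnosis": "flu"},
]

print(is_k_anonymous(records, ["age", "zip"], k=2))  # → False: lone 148** row
```

The point of attacks like combinatorial refinement is that satisfying this property alone, even without auxiliary data, can still leave records re-identifiable.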

Sources

Model Unmerging: Making Your Models Unmergeable for Secure Model Sharing

Exposing Privacy Risks in Anonymizing Clinical Data: Combinatorial Refinement Attacks on k-Anonymity Without Auxiliary Information

An Automated, Scalable Machine Learning Model Inversion Assessment Pipeline

Dataset Ownership in the Era of Large Language Models

RetinaGuard: Obfuscating Retinal Age in Fundus Images for Biometric Privacy Preserving
