Advances in Secure and Private Machine Learning

The field of machine learning is placing greater emphasis on security and privacy, with several recent papers presenting solutions for protecting sensitive data and models. One key area of focus is protocols for secure computation on private data, such as multi-party private set operations and differentially private optimization; these protocols aim to balance accurate model training against the protection of sensitive information. Another line of research develops new attacks and defenses for machine learning models, including novel model-stealing attacks and corresponding mitigation strategies.

Noteworthy papers include SONNI, which proposes a novel results-checking protocol to protect against model-stealing attacks; Differentially Private Quasi-Concave Optimization, which presents a generic differentially private optimizer for approximated quasi-concave functions; Multi-Party Private Set Operations from Predicative Zero-Sharing, which presents a highly versatile framework for secure computation on private sets; and Federated One-Shot Learning with Data Privacy and Objective-Hiding, which presents a federated learning approach addressing both data privacy and objective hiding.
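To make the privacy/accuracy trade-off concrete, the sketch below shows the standard clip-and-noise pattern behind differentially private gradient updates (the Gaussian mechanism): each per-example gradient is clipped to bound its influence, and calibrated noise is added to the average. This is a generic illustration, not the optimizer from any of the papers above; the function name and parameters are illustrative.

```python
import math
import random

def dp_gradient_step(per_example_grads, clip_norm=1.0,
                     noise_multiplier=1.1, seed=0):
    """Illustrative DP update: clip each per-example gradient to
    clip_norm, average, then add Gaussian noise scaled to the
    clipping bound (a sketch, not any cited paper's algorithm)."""
    rng = random.Random(seed)
    clipped = []
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        # Scale down any gradient whose L2 norm exceeds clip_norm,
        # so one example's contribution is bounded (the sensitivity).
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        clipped.append([x * scale for x in g])
    n = len(clipped)
    dim = len(clipped[0])
    avg = [sum(row[j] for row in clipped) / n for j in range(dim)]
    # Noise standard deviation is proportional to the sensitivity
    # of the average (clip_norm / n); larger noise_multiplier means
    # stronger privacy but a less accurate gradient estimate.
    sigma = noise_multiplier * clip_norm / n
    return [a + rng.gauss(0.0, sigma) for a in avg]

noisy = dp_gradient_step([[3.0, 4.0], [0.3, -0.4]])
```

The `noise_multiplier` parameter directly expresses the trade-off discussed above: the privacy guarantee tightens as it grows, at the cost of noisier updates and slower convergence.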

Sources

SONNI: Secure Oblivious Neural Network Inference

Differentially Private Quasi-Concave Optimization: Bypassing the Lower Bound and Application to Geometric Problems

Multi-Party Private Set Operations from Predicative Zero-Sharing

New Capacity Bounds for PIR on Graph and Multigraph-Based Replicated Storage

Federated One-Shot Learning with Data Privacy and Objective-Hiding

Bilateral Differentially Private Vertical Federated Boosted Decision Trees
