Advances in Secure and Efficient Machine Learning

Machine learning research is increasingly focused on security and efficiency, driven by the twin challenges of data privacy and scalability. Researchers are exploring techniques such as secure multi-party computation (MPC), federated learning, and meta-reinforcement learning to make models both more secure and more efficient. Notable papers in this area include a framework for private inference built on a helper-assisted, dishonest-majority MPC model with malicious security, reported to achieve state-of-the-art efficiency and accuracy. Another paper introduces HASSLE, a self-supervised-learning-enhanced hijacking attack on vertical federated learning, highlighting vulnerabilities of that setting. A third combines split learning with function secret sharing, reducing communication and computational costs while maintaining strong security guarantees.
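To make the shared primitive behind these protocols concrete, the following is a minimal sketch of two-party additive secret sharing over a ring with fixed-point encoding, the basic building block underlying MPC-based private inference and function-secret-sharing-style split learning. It is illustrative only, not the cited papers' protocols; the ring size, scaling factor, and function names are assumptions chosen for the example.

```python
# Minimal sketch (not the cited papers' protocols): 2-party additive secret
# sharing over the ring Z_{2^32}, with fixed-point encoding of real values.
import numpy as np

RING = 2**32   # share arithmetic is done modulo this ring size (assumption)
SCALE = 2**16  # fixed-point scaling factor for real-valued tensors (assumption)
rng = np.random.default_rng(0)

def encode(x):
    """Encode a real-valued tensor as fixed-point ring elements."""
    return np.round(x * SCALE).astype(np.int64) % RING

def decode(x):
    """Decode ring elements back to reals, treating the top half as negative."""
    x = x % RING
    x = np.where(x >= RING // 2, x - RING, x)
    return x.astype(np.float64) / SCALE

def share(x_enc):
    """Split an encoded tensor into two additive shares; each share alone is uniform."""
    s0 = rng.integers(0, RING, size=x_enc.shape, dtype=np.int64)
    s1 = (x_enc - s0) % RING
    return s0, s1

def reconstruct(s0, s1):
    """Recombine the two shares to recover the encoded value."""
    return (s0 + s1) % RING

# Example: a client secret-shares an activation vector between two servers.
activation = np.array([0.25, -1.5, 3.0])
a0, a1 = share(encode(activation))

# Each server adds its shares of two secret tensors locally; nothing is
# revealed until the result shares are recombined.
b0, b1 = share(encode(np.array([1.0, 1.0, 1.0])))
r0, r1 = (a0 + b0) % RING, (a1 + b1) % RING

print(decode(reconstruct(r0, r1)))  # -> [ 1.25 -0.5   4.  ]
```

In this style of protocol, linear operations (additions, and multiplications with the help of precomputed correlated randomness) are performed directly on shares, which is one reason such schemes can keep communication and computation costs low relative to fully homomorphic approaches.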

Sources

Efficient Private Inference Based on Helper-Assisted Malicious Security Dishonest Majority MPC

HASSLE: A Self-Supervised Learning Enhanced Hijacking Attack on Vertical Federated Learning

Split Happens: Combating Advanced Threats with Split Learning and Function Secret Sharing

Meta-Reinforcement Learning for Fast and Data-Efficient Spectrum Allocation in Dynamic Wireless Networks

Resilient Time-Sensitive Networking for Industrial IoT: Configuration and Fault-Tolerance Evaluation

A Deep Reinforcement Learning Method for Multi-objective Transmission Switching

Towards Ultra-Reliable 6G in-X Subnetworks: Dynamic Link Adaptation by Deep Reinforcement Learning

Self-Adaptive and Robust Federated Spectrum Sensing without Benign Majority for Cellular Networks

A Bayesian Incentive Mechanism for Poison-Resilient Federated Learning

Safeguarding Federated Learning-based Road Condition Classification
