Advances in Secure Computing and Artificial Intelligence

The field of secure computing and artificial intelligence is evolving rapidly, driven by the need to protect sensitive data and ensure the integrity of machine learning models. Recent research has explored functional encryption, secure multi-party computation (MPC), and zero-knowledge proofs as building blocks for secure neural network training and inference, and there is growing interest in privacy-preserving protocols for decentralized applications based on MPC and homomorphic encryption. Notable papers in this area include:

Functional Encryption in Secure Neural Network Training, which presents an attack on neural networks that use functional encryption for secure training and proposes two mitigations for the resulting data leakage.

VeriLLM, a publicly verifiable protocol for decentralized language model inference that achieves security under a one-honest-verifier assumption at near-negligible verification cost.

PRIVMARK, an MPC-based watermarking framework for large language models that enables multiple parties to collaboratively watermark a model's output without exposing the model's weights (the underlying secret-sharing primitive is sketched after this list).

Calyx, the first privacy-preserving multi-token optimistic-rollup protocol, which guarantees full payment privacy for all transactions and supports atomic execution of multiple transactions.

Lattica, a decentralized cross-NAT communication framework for distributed AI systems that integrates NAT traversal mechanisms, a decentralized data store, and a content discovery layer.

Sentry, a GPU-based framework that verifies the authenticity of machine learning artifacts by cryptographically signing and verifying datasets and models (see the hash-then-sign sketch below).

NoMod, a non-modular attack on Module Learning With Errors that treats wrap-arounds as statistical corruption and casts secret recovery as robust linear estimation (illustrated in the final sketch below).
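
PRIVMARK builds on secure multi-party computation. As a hedged illustration of the primitive such protocols rest on, and not PRIVMARK's actual construction, the Python sketch below shows additive secret sharing over a prime field: each party holds a uniformly random-looking share, any strict subset of shares reveals nothing, and linear operations can be computed share-wise without communication. The field modulus and party count are illustrative choices.

```python
# Additive secret sharing over a prime field: only the sum of all shares
# (mod P) reveals the secret. A minimal sketch of the MPC primitive, not
# PRIVMARK's full watermarking protocol.
import secrets

P = 2**61 - 1  # Mersenne prime modulus (illustrative choice)

def share(secret: int, n_parties: int) -> list[int]:
    """Split `secret` into n additive shares; any n-1 shares reveal nothing."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % P

# Parties can add shared values locally: share-wise addition yields
# shares of the sum, with no communication between parties.
a_shares = share(1234, 3)
b_shares = share(5678, 3)
sum_shares = [(a + b) % P for a, b in zip(a_shares, b_shares)]
assert reconstruct(sum_shares) == (1234 + 5678) % P
```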
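
Sentry's core operation is cryptographic signing and verification of artifacts. The sketch below shows a conventional hash-then-sign flow in Python using the widely available cryptography package: hash the artifact file, sign the digest with Ed25519, and verify before use. This is a CPU-side sketch of the generic pattern, not Sentry's GPU implementation, and the file name is a hypothetical placeholder.

```python
# Hash-then-sign for ML artifacts: a producer signs the SHA-256 digest of a
# model or dataset file, and consumers verify the signature before loading.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def digest_file(path: str) -> bytes:
    """Stream the file in 1 MiB chunks and return its SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()

signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

# "model.safetensors" is a hypothetical artifact path.
artifact_digest = digest_file("model.safetensors")
signature = signing_key.sign(artifact_digest)

try:
    verify_key.verify(signature, digest_file("model.safetensors"))
    print("artifact authentic")
except InvalidSignature:
    print("artifact was tampered with")
```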
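
The NoMod idea, as summarized above, is that when LWE samples are viewed without modular reduction, the occasional wrap-arounds look like gross outliers rather than bounded noise, so a robust estimator can recover the secret. The toy Python sketch below illustrates this view under loose assumptions: the parameters are chosen so that only a few samples wrap, and scikit-learn's Huber regressor stands in for the paper's estimator, which may differ.

```python
# Toy non-modular view of LWE: b = A s + e (mod q). With small toy parameters
# most centered samples do not wrap, so the rare wrap-arounds (offsets of a
# full multiple of q) act as gross outliers for a robust linear regressor.
import numpy as np
from sklearn.linear_model import HuberRegressor

rng = np.random.default_rng(0)
n, m, q = 16, 400, 3329                    # toy dimensions, Kyber-like modulus

s = rng.integers(-1, 2, size=n)            # small ternary secret
A = rng.integers(-400, 401, size=(m, n))   # toy range so wrap-arounds stay rare
e = rng.integers(-3, 4, size=m)            # small bounded noise
b = (A @ s + e) % q                        # LWE samples, reduced mod q
b_centered = ((b + q // 2) % q) - q // 2   # representatives in [-q/2, q/2]

# Samples where |A s + e| exceeded q/2 now sit a full multiple of q away from
# the noiseless value: statistical corruption, not Gaussian-like noise.
huber = HuberRegressor(fit_intercept=False, max_iter=1000)
huber.fit(A, b_centered)
s_hat = np.rint(huber.coef_).astype(int)
print("secret recovered:", np.array_equal(s_hat, s))
```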

Sources

Functional Encryption in Secure Neural Network Training: Data Leakage and Practical Mitigations

Eliminating Exponential Key Growth in PRG-Based Distributed Point Functions

DNS in the Time of Curiosity: A Tale of Collaborative User Privacy Protection

LLUAD: Low-Latency User-Anonymized DNS

VeriLLM: A Lightweight Framework for Publicly Verifiable Decentralized Inference

JSProtect: A Scalable Obfuscation Framework for Mini-Games in WeChat

PRIVMARK: Private Large Language Models Watermarking with MPC

Optimizing Privacy-Preserving Primitives to Support LLM-Scale Applications

Calyx: Privacy-Preserving Multi-Token Optimistic-Rollup Protocol

Lattica: A Decentralized Cross-NAT Communication Framework for Scalable AI Inference and Training

Sentry: Authenticating Machine Learning Artifacts on the Fly

Integrated Security Mechanisms for Weight Protection in Memristive Crossbar Arrays

NoMod: A Non-modular Attack on Module Learning With Errors
