Advances in Secure and Transparent AI Systems

The field of artificial intelligence is moving toward more secure and transparent systems. Researchers are developing methods to detect and prevent backdoor attacks in large language models, designing more efficient and scalable blockchain execution engines, and building verifiable and explainable AI models, with an emphasis on ensuring the integrity and reliability of AI decision-making. Noteworthy papers in this area include PoTS, which introduces a proof-of-training-steps protocol for detecting backdoor attacks in large language models, and NEMO, a blockchain execution engine that improves parallel execution performance under high contention. Other notable works include Nondeterminism-Aware Optimistic Verification for floating-point neural networks, Verifiable Fine-Tuning for LLMs with zero-knowledge training proofs bound to data provenance, and TRUST, a decentralized framework for auditing large language model reasoning.
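The nondeterminism-aware verification line of work reflects a practical obstacle: floating-point neural network inference can produce slightly different outputs across hardware and kernel implementations, so a verifier cannot demand bit-exact reproduction of a prover's results. The sketch below illustrates the general idea of tolerance-based output checking; the function name, tolerance values, and usage are illustrative assumptions, not the protocol from the cited paper.

```python
import numpy as np

def outputs_match(prover_logits: np.ndarray,
                  verifier_logits: np.ndarray,
                  rel_tol: float = 1e-3,
                  abs_tol: float = 1e-6) -> bool:
    """Accept a prover's reported outputs if they agree with the
    verifier's re-execution up to a floating-point tolerance,
    rather than requiring bit-exact equality (which nondeterministic
    GPU kernels generally cannot guarantee)."""
    return np.allclose(prover_logits, verifier_logits,
                       rtol=rel_tol, atol=abs_tol)

# Illustrative usage: the verifier re-runs the same input and
# compares its outputs against the prover's claimed outputs.
prover_out = np.array([1.2001, -0.4999, 3.1416])
verifier_out = np.array([1.2001, -0.5000, 3.1415])
print(outputs_match(prover_out, verifier_out))  # True within tolerance
```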

Sources

PoTS: Proof-of-Training-Steps for Backdoor Detection in Large Language Models

NEMO: Faster Parallel Execution for Highly Contended Blockchain Workloads (Full version)

Generalized Methodology for Determining Numerical Features of Hardware Floating-Point Matrix Multipliers: Part I

On-Chain Decentralized Learning and Cost-Effective Inference for DeFi Attack Mitigation

Nondeterminism-Aware Optimistic Verification for Floating-Point Neural Networks

JAX Autodiff from a Linear Logic Perspective (Extended Version)

Verifiable Fine-Tuning for LLMs: Zero-Knowledge Training Proofs Bound to Data Provenance and Policy

Exploiting the Potential of Linearity in Automatic Differentiation and Computational Cryptography

Just-In-Time Piecewise-Linear Semantics for ReLU-type Networks

Policy-Governed RAG - Research Design Study

TRUST: A Decentralized Framework for Auditing Large Language Model Reasoning
