Zero knowledge. Zero retention. Zero compromise.
ProvnZero is a high-performance Zero Data Retention (ZDR) proxy for AI APIs. If you are building enterprise AI applications, sending sensitive data (PII, trade secrets, medical records) in plaintext to cloud LLMs is a massive liability. ProvnZero addresses this directly: data is sealed on the client, decrypted in memory only at the moment of API transmission, and wiped from RAM immediately afterward.
ProvnZero's bold claims are backed by Rust's ownership model. Unlike garbage-collected languages, where plaintext strings can linger in memory indefinitely, Rust lets ProvnZero confine plaintext to restricted `SecureBuffer` memory regions and wipe them with the `zeroize` crate the exact instant they drop out of scope.
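As a minimal sketch of the zero-on-drop idea: `SecureBuffer` below is a hypothetical stand-in (not ProvnZero's actual type) that wipes its contents with volatile writes, the same technique the `zeroize` crate implements, so the compiler cannot optimize the wipe away.

```rust
use std::sync::atomic::{compiler_fence, Ordering};

/// Hypothetical zero-on-drop buffer -- illustrative, not ProvnZero's real type.
struct SecureBuffer {
    data: Vec<u8>,
}

impl SecureBuffer {
    fn new(data: Vec<u8>) -> Self {
        SecureBuffer { data }
    }

    fn as_slice(&self) -> &[u8] {
        &self.data
    }
}

impl Drop for SecureBuffer {
    fn drop(&mut self) {
        // Volatile writes prevent the compiler from eliding the wipe as a
        // "dead store"; the fence keeps it ordered before deallocation.
        for byte in self.data.iter_mut() {
            unsafe { std::ptr::write_volatile(byte, 0) };
        }
        compiler_fence(Ordering::SeqCst);
    }
}

fn main() {
    let plaintext = SecureBuffer::new(b"patient record #1234".to_vec());
    // ... the plaintext is used here for the outbound API call ...
    assert_eq!(plaintext.as_slice().len(), 20);
    // When `plaintext` falls out of scope, Drop zeroes every byte before
    // the allocation is returned to the allocator.
}
```

The key design point is that the wipe is tied to scope, not to a manual cleanup call a developer could forget.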
ProvnZero is built in high-performance Rust and is designed to sit directly between your client applications and the LLM providers. Because it is a standalone binary (and containerizable), you can run it anywhere:
- Local Edge Gateway: Run ProvnZero directly on the same physical server or local network as your AI agents. The agents talk to `localhost:3001` over the local loopback, and ProvnZero handles the secure outbound tunnel to OpenAI.
- Cloud Service Proxy (e.g., Railway, Heroku): Deploy ProvnZero to a generic cloud provider. Your remote clients (mobile apps, web apps) seal their data via the SDK and send it to your hosted ProvnZero URL. Railway deployments are pre-configured out of the box.
- Enterprise VPC: Run it inside your corporate Virtual Private Cloud. All internal company apps route internal LLM requests through the proxy to ensure no prompts leak into standard company HTTP logs.
- Confidential Computing (Future): The Rust binary is perfectly sized to run inside AWS Nitro Enclaves or Intel SGX, providing a hardware-rooted guarantee of the ZDR claims.
- Standard Cryptography: Uses RFC 9180 HPKE (Hybrid Public Key Encryption) for client-side payload sealing. The proxy only sees opaque byte arrays over the wire.
- Zero Data Retention: Powered by Rust's memory safety guarantees. Exclusively uses `SecureBuffer` and the `zeroize` crate to wipe ephemeral decryption keys and plaintext from RAM immediately after the LLM responds.
- Cryptographic Audit Trails: Plaintext logs would defeat the purpose of ZDR. Instead, ProvnZero generates Ed25519-signed VEX (Vulnerability Exploitability eXchange) receipts: cryptographic proof that an inference request was safely handled, without emitting the prompt itself.
- Universal Compatibility: Out-of-the-box support for OpenAI, Anthropic, and DeepSeek, plus any OpenAI-compatible endpoint (e.g., Groq, OpenRouter, vLLM, Ollama) via the `OPENAI_BASE_URL` override.
- Production Ready: Built on Axum's async runtime, hardened with `tower-governor` rate limiting to prevent API abuse, and equipped with `tokio` graceful-shutdown handlers so in-flight requests and buffers are cleaned up during container orchestration.
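The README does not specify the receipt schema, so as a rough illustration only, a signed ZDR receipt might carry fields like the following (the struct name, field names, and layout are assumptions, not ProvnZero's actual format):

```rust
/// Hypothetical receipt layout -- illustrative only, not ProvnZero's schema.
struct VexReceipt {
    request_id: String,        // opaque ID for the proxied call
    payload_sha256: [u8; 32],  // hash of the *sealed* payload, never the plaintext
    zeroized_at_unix_ms: u64,  // when the plaintext buffer was wiped
    signature: [u8; 64],       // Ed25519 signature over the fields above
}

fn main() {
    let receipt = VexReceipt {
        request_id: "req-0001".to_string(),
        payload_sha256: [0u8; 32],
        zeroized_at_unix_ms: 1_700_000_000_000,
        signature: [0u8; 64],
    };
    // An auditor would verify the Ed25519 signature against the proxy's
    // published public key; note that no prompt text is needed to do so.
    assert_eq!(receipt.signature.len(), 64);
    assert_eq!(receipt.payload_sha256.len(), 32);
    assert_eq!(receipt.request_id, "req-0001");
}
```

The point of such a structure is that everything in it is either opaque (hashes, IDs) or metadata, so the receipt can be logged and audited freely without retaining any prompt content.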
This repository is a monorepo containing:
- provnzero-proxy: The core Rust (Axum) proxy service. This is the server you deploy.
- provnzero-sdk: The TypeScript wrapper for client integration. This handles the complex client-side HPKE math before your app makes the network request.
```mermaid
graph LR
    Client[Client App] -- "1. HPKE Sealed Request" --> Proxy[ProvnZero Proxy]
    Proxy -- "2. Decrypted Prompt" --> LLM[LLM Provider]
    LLM -- "3. Response" --> Proxy
    Proxy -- "4. HPKE Sealed Response \n + VEX Receipt" --> Client
    subgraph "Zero Data Retention Zone"
        Proxy
    end
```
In live testing against the Groq API (llama3-8b-8192), ProvnZero passed 100% of automated integration tests (12/12).
This validates three core proxy capabilities:
- Cryptography: HPKE sealed payloads unwrap seamlessly.
- Concurrency: Parallel, rapid-fire requests resolve correctly under `tower-governor` rate limits.
- ZDR Enforcement: Memory is destroyed (zeroized) after every round trip, yielding valid Ed25519 VEX receipts.
ProvnZero is natively containerized and configured for Railway. Clicking the button above will automatically provision a production-ready, zero-retention proxy instance with strict Axum rate limits pre-configured.
Required variables during setup:
- `OPENAI_API_KEY`: Required to authenticate outbound traffic.
- `OPENAI_BASE_URL` (optional): Set this to proxy to a different provider (e.g., Anthropic, DeepSeek, or a locally hosted vLLM).
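As a sketch of how the override might resolve inside the proxy (the helper name is hypothetical, and ProvnZero's real config code may differ; the fallback shown is OpenAI's public API base URL):

```rust
use std::env;

/// Hypothetical helper mirroring the `OPENAI_BASE_URL` override described
/// above: if the variable is unset, fall back to OpenAI's public endpoint.
fn resolve_base_url() -> String {
    env::var("OPENAI_BASE_URL")
        .unwrap_or_else(|_| "https://api.openai.com/v1".to_string())
}

fn main() {
    // Point the proxy at a locally hosted OpenAI-compatible server
    // (e.g., vLLM) just by setting the override.
    env::set_var("OPENAI_BASE_URL", "http://localhost:8000/v1");
    assert_eq!(resolve_base_url(), "http://localhost:8000/v1");
}
```

Because the override is a plain environment variable, the same binary serves OpenAI, Anthropic-compatible gateways, or local inference servers without a rebuild.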
Once deployed, point your client application (using the provnzero-sdk) to your new Railway `*.up.railway.app` URL!
- Deploy the Proxy manually: Follow the Proxy README to start the Rust server locally.
- Integrate the SDK: Follow the SDK README to initialize the client in your Node/TypeScript application.
If ProvnZero solves a critical infrastructure problem for you, please give this repo a ⭐️ on GitHub! Your support drives engagement and signals trust to enterprise adopters.
- Want to hack on ProvnZero? See our Contributing Guide.
- Discovered a vulnerability? Please review our Security Policy.
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.