A major concern for enterprise AI adoption is data privacy. When you send proprietary code, customer support tickets, or financial data to an LLM provider (like OpenAI or Anthropic), you need a guarantee that they aren’t saving that data to train their future models. LLM Router itself operates on a strict Zero Data Retention (ZDR) architecture—we never store your prompts or completions. However, the upstream providers you route to might still log your data unless explicitly configured otherwise. To solve this, LLM Router offers the `zdr` configuration flag.
## How the ZDR Flag Works
When you enable `zdr: true` in your request, LLM Router will only route your prompt to providers and models that have a strict, contractual Zero Data Retention policy, or that accept API headers (like `X-Opt-Out`) which legally prevent them from storing or training on your data.
If your standard routing configuration (e.g., your `order` or `tags`) attempts to use a provider that does not support ZDR, LLM Router will automatically block that route and fall back to a compliant provider.
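The filtering step above can be sketched in TypeScript. This is an illustrative model only: the `Provider` shape and `zdrCompliant` flag are hypothetical names, not LLM Router internals.

```typescript
// Hypothetical sketch of ZDR route filtering: providers that cannot
// guarantee Zero Data Retention are dropped from the fallback order.
interface Provider {
  name: string;
  zdrCompliant: boolean; // assumed metadata flag, for illustration only
}

function filterRoutingOrder(order: Provider[], zdr: boolean): Provider[] {
  if (!zdr) return order; // no filtering when the flag is off
  return order.filter((p) => p.zdrCompliant);
}

const order: Provider[] = [
  { name: "provider-a", zdrCompliant: true },
  { name: "provider-b", zdrCompliant: false },
  { name: "provider-c", zdrCompliant: true },
];

// With zdr enabled, non-compliant provider-b is blocked and the
// request falls back to the next compliant provider in the order.
const compliant = filterRoutingOrder(order, true);
```

The key point is that the fallback order itself is preserved; non-compliant entries are simply skipped rather than reordered.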
## Configuration
You configure this behavior inside the `gateway` object.
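A minimal sketch of what a request might look like, assuming an OpenAI-style chat completions payload with LLM Router's `gateway` extension; the model name and message are placeholders.

```typescript
// Illustrative request body: the `gateway.zdr` flag restricts routing
// to Zero Data Retention providers only.
const requestBody = {
  model: "gpt-4o", // placeholder model name
  messages: [{ role: "user", content: "Summarize this internal memo." }],
  gateway: {
    zdr: true, // only route to providers with a contractual ZDR guarantee
  },
};
```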
### Configuration Properties
| Property | Type | Default | Description |
|---|---|---|---|
| `zdr` | `boolean` | `false` | When `true`, LLM Router actively filters your fallback order and tag routing to allow only providers that guarantee Zero Data Retention and no model training on API inputs. |
## Redaction Synergy
For maximum enterprise security, combine `zdr: true` with our Data Redaction (`gateway.redact`) features. This ensures that even if a compliant provider experiences a breach, your most sensitive PII was never transmitted to them in the first place.
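Combining the two might look like the following sketch. The exact shape of the `gateway.redact` option is an assumption here; consult the Data Redaction documentation for its real schema.

```typescript
// Illustrative gateway config pairing ZDR routing with redaction.
const gateway = {
  zdr: true, // route only to Zero Data Retention providers
  redact: {
    enabled: true, // assumed option: strip PII before the prompt leaves
  },
};
```

With both enabled, redaction limits what sensitive data is ever sent, while ZDR limits who is allowed to receive it.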