
When users interact with AI applications (especially coding assistants or customer support bots), they frequently and accidentally paste sensitive information into the chat. A developer might paste a production database connection string containing a password. A customer might paste a credit card number or SSN into a support ticket. Sending this raw data to third-party LLM providers (such as OpenAI or Anthropic) creates serious security and compliance risks under GDPR, HIPAA, and SOC 2. LLM Router's Zero-Trust Data Redaction solves this: it scans every incoming prompt in milliseconds and automatically masks sensitive data before the request is forwarded to the upstream AI provider.

How Redaction Works

LLM Router uses a combination of high-speed regex patterns and local entity recognition to detect sensitive data formats. When a match is found, the sensitive string is replaced with a safe placeholder token (e.g., c****t@g****l.com or 44** **** **** ***9). The AI model processes the safe placeholder, retains enough context to answer correctly, and returns a response without ever seeing the raw data.
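To make the regex-then-mask approach concrete, here is a minimal sketch of an email detector in TypeScript. This is purely illustrative (it is not LLM Router's actual implementation); the masking style keeps the first and last two characters of the local part and domain label.

```typescript
// Illustrative only: detect emails with a regex, then replace each match
// with a partially masked placeholder.
const EMAIL_RE = /([A-Za-z0-9._%+-]+)@([A-Za-z0-9-]+)(\.[A-Za-z]{2,})/g;

// Keep the first and last two characters; mask everything in between.
function maskPart(part: string): string {
  if (part.length <= 4) return "****";
  return part.slice(0, 2) + "****" + part.slice(-2);
}

function redactEmails(text: string): string {
  return text.replace(
    EMAIL_RE,
    (_match, local, domain, tld) => `${maskPart(local)}@${maskPart(domain)}${tld}`
  );
}
```

For example, `redactEmails("contact john.doe@company.com")` yields `"contact jo****oe@co****ny.com"`. A production detector would handle many more formats and edge cases; this only shows the detect-and-substitute pattern.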

Configuration

You configure exactly which types of data you want to mask inside the gateway.redact object.
Node.js

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.llmrouter.app/v1",
  apiKey: process.env.LLM_ROUTER_API_KEY,
});

async function main() {
  const response = await client.chat.completions.create({
    model: "claude-3-5-sonnet", // Or leave blank if using Tags
    messages: [
      {
        role: "user",
        content:
          "Debug my database connection: postgres://admin:superSecret99!@db.aws.com:5432. Also, my email is john.doe@company.com and my secret project is Project Apollo.",
      },
    ],

    // @ts-expect-error - Custom LLM Router extension
    gateway: {
      redact: {
        // Toggle specific PII detectors
        email: true,
        token: true, // Detects passwords, API keys, JWTs

        // Hide proprietary company terms
        custom: ["Project Apollo", "AcmeCorp Internal API"],
      },
    },
  });

  console.log(response.choices[0].message.content);
}
main();
```

Visualizing the Redaction

What you sent to LLM Router:
“Debug my database connection: postgres://admin:superSecret99!@db.aws.com:5432. Also, my email is john.doe@company.com and my secret project is Project Apollo.”
What LLM Router sent to Anthropic:
“Debug my database connection: postgres://admin:*****@d****om:5432. Also, my email is jo****oe@co****ny.com and my secret project is ****.”
The AI can successfully debug the database structure without ever seeing the actual password, email, or your company’s secret project name.
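The password masking in the example above can be sketched with a single regex pass over `scheme://user:password@host` connection strings. Again, this is an illustration of the idea rather than the gateway's actual `token` detector, which covers many more credential formats (API keys, JWTs, bearer tokens).

```typescript
// Illustrative sketch: replace the password segment of a connection string
// (scheme://user:password@host) with a fixed placeholder.
function redactConnectionPassword(text: string): string {
  return text.replace(
    /([a-z][a-z0-9+.-]*:\/\/[^\s:@/]+:)([^\s@]+)@/gi,
    (_match, prefix) => `${prefix}*****@`
  );
}
```

Running it on the prompt from the example, `redactConnectionPassword("postgres://admin:superSecret99!@db.aws.com:5432")` returns `"postgres://admin:*****@db.aws.com:5432"`: the credential is gone, but the structure the AI needs for debugging is preserved.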

Configuration Properties

You can toggle individual detectors based on your specific compliance needs. Each detector property is an optional boolean (true to enable redaction; omitted or false to disable). The custom property, described below, takes an array of strings instead.

The redact Object

| Property | Type | Description |
| --- | --- | --- |
| `email` | boolean | Detects standard email addresses (e.g., user@domain.com). |
| `phone` | boolean | Detects international and domestic phone numbers. |
| `ip` | boolean | Detects IPv4 and IPv6 addresses. |
| `uuid` | boolean | Detects standard UUID strings (often used as internal system IDs). |
| `token` | boolean | **Highly Recommended:** Aggressively detects API keys (OpenAI, AWS, Stripe), JWTs, Bearer tokens, and passwords embedded in connection strings. |
| `credit_card` | boolean | Detects major credit card formats (Visa, Mastercard, Amex). |
| `iban` | boolean | Detects International Bank Account Numbers. |
| `ssn` | boolean | Detects US Social Security Numbers. |
| `mac` | boolean | Detects MAC addresses. |
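For a maximally strict compliance profile, you can enable every detector in the table at once. The fragment below is illustrative; pass the object as `gateway.redact` in the request body, as in the earlier example.

```typescript
// A full-coverage redaction profile enabling every documented detector.
const redactAll = {
  email: true,
  phone: true,
  ip: true,
  uuid: true,
  token: true, // API keys, JWTs, Bearer tokens, embedded passwords
  credit_card: true,
  iban: true,
  ssn: true,
  mac: true,
};
```

In practice you would enable only the detectors your compliance regime requires, since aggressive redaction (especially `token`) can occasionally mask strings the model legitimately needs.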

Custom Keyword Redaction

| Property | Type | Description |
| --- | --- | --- |
| `custom` | string[] | An array of exact strings to aggressively redact. Perfect for hiding proprietary code names, internal employee names, or specific competitor names you don't want leaking into AI training data. |
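Conceptually, custom keyword redaction is exact-string substitution. The sketch below illustrates the idea (it is not the gateway's code); note that it is case-sensitive, matching the "exact strings" behavior described above.

```typescript
// Illustrative sketch: replace every exact occurrence of each keyword
// with a fixed placeholder. split/join avoids regex-escaping pitfalls.
function redactCustom(text: string, keywords: string[]): string {
  return keywords.reduce((out, kw) => out.split(kw).join("****"), text);
}
```

For example, `redactCustom("my secret project is Project Apollo.", ["Project Apollo"])` returns `"my secret project is ****."`, matching the masked output shown in the visualization earlier.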

Zero Data Retention Guarantee

Even if you choose not to enable redaction, LLM Router operates on a strict Zero Data Retention architecture. We process your prompts in ephemeral memory to route them, and we never store your prompt data in our databases.