Documentation Index

Fetch the complete documentation index at: https://docs.llmrouter.app/llms.txt
LLM Router is fully compatible with the official OpenAI SDK. This means you can keep using the familiar openai package in Node.js or Python to access any supported model (including Anthropic’s Claude and Google’s Gemini) without restructuring your code.

1. Chat Completions API

The Chat Completions API is the standard way to interact with LLMs. To route requests through LLM Router, change only the baseURL and apiKey when initializing the client:
import OpenAI from "openai";

// 1. Initialize with LLM Router credentials
const client = new OpenAI({
  baseURL: "https://api.llmrouter.app/v1",
  apiKey: process.env.LLM_ROUTER_API_KEY,
});

async function main() {
  const response = await client.chat.completions.create({
    // 2. Use ANY supported model (prefixed with provider slug)
    model: "anthropic/claude-3-5-sonnet",
    messages: [{ role: "user", content: "Explain quantum physics." }],

    // 3. (Optional) Pass LLM Router features natively at the root
    // @ts-expect-error - Custom LLM Router extension
    gateway: {
      chatHistoryCompression: { enabled: true, score: 0.6 },
      redact: { token: true },
    },
  });

  console.log(response.choices[0].message.content);
}
main();

2. Messages API

If you prefer the Messages API format (popularized by Anthropic, and often used for agentic workflows or structured outputs), LLM Router supports this endpoint natively as well. Use the exact same client configuration, but call the client.beta.messages methods.
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.llmrouter.app/v1",
  apiKey: process.env.LLM_ROUTER_API_KEY,
});

async function main() {
  const response = await client.beta.messages.create({
    model: "openai/o1-mini",
    messages: [{ role: "user", content: "Write a sorting algorithm." }],
    max_tokens: 1024,

    // Send routing logic and context compression rules
    // @ts-expect-error - Custom LLM Router extension
    gateway: {
      chatHistoryCompression: { enabled: true, score: 0.5 },
    },
  });

  console.log(response.content[0].text);
}
main();
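Both snippets pass the same gateway extension inline. If you reuse these settings across calls, a small typed helper keeps the shape in one place. Note that the type below is inferred from the examples in this guide; it is an assumption, not an official LLM Router schema:

```typescript
// Assumed shape of the `gateway` extension, inferred from the examples above.
type GatewayOptions = {
  chatHistoryCompression?: { enabled: boolean; score: number };
  redact?: { token: boolean };
};

// Build compression settings, clamping the score into the 0..1 range the
// examples use. (That higher score means more aggressive compression is
// also an assumption.)
export function compressionOptions(
  score: number,
  redactTokens = false,
): GatewayOptions {
  const clamped = Math.min(1, Math.max(0, score));
  const options: GatewayOptions = {
    chatHistoryCompression: { enabled: true, score: clamped },
  };
  if (redactTokens) options.redact = { token: true };
  return options;
}
```

Pass the result as gateway: compressionOptions(0.6, true) in the request body, keeping the @ts-expect-error comment, since the OpenAI SDK's types do not know about the extension.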