
By default, LLM Router dynamically chooses the best upstream provider based on real-time latency and uptime. However, you can override this behavior to explicitly dictate which providers handle your requests, and in what fallback sequence, using the order and only configurations.

Configuring Provider Fallbacks (order)

Use the order array to define a strict priority list of providers. If the first provider fails, times out, or hits a rate limit, LLM Router will automatically fall back to the next provider in the array.
TypeScript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.llmrouter.app/v1",
  apiKey: process.env.LLM_ROUTER_API_KEY,
});

async function main() {
  const response = await client.chat.completions.create({
    model: "anthropic/claude-opus-4.6",
    messages: [{ role: "user", content: "Write a sorting algorithm." }],
    gateway: {
      order: ["bedrock", "anthropic", "vertex"],
    },
  });

  console.log(response.choices[0].message.content);
}
main();
In this example:
  1. LLM Router will first attempt to route the Claude Opus 4.6 request through Amazon Bedrock.
  2. If Bedrock is unavailable or fails, it will seamlessly fall back to Anthropic’s direct API.
  3. If Anthropic fails, it will try Google Vertex AI.
  4. Other providers are still available but will only be attempted after this specified sequence is exhausted.
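The fallback logic above can be sketched as a simple loop: try each provider in sequence and return the first success. This is an illustrative sketch, not LLM Router's actual implementation; `tryProvider` is a hypothetical stand-in for a real upstream call.

```typescript
// null from tryProvider simulates a failure, timeout, or rate limit.
type Attempt = (provider: string) => string | null;

function routeWithOrder(order: string[], tryProvider: Attempt): string {
  for (const provider of order) {
    const result = tryProvider(provider);
    if (result !== null) return result; // first healthy provider wins
  }
  throw new Error("All providers in `order` failed");
}

// Simulate Bedrock being down while Anthropic is healthy:
const winner = routeWithOrder(["bedrock", "anthropic", "vertex"], (p) =>
  p === "bedrock" ? null : `handled by ${p}`
);
console.log(winner); // "handled by anthropic"
```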

Strict Whitelisting (only)

If you have strict compliance, data privacy, or billing requirements, you can use the only array. This creates a hard whitelist. LLM Router will only consider the providers listed here.
TypeScript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.llmrouter.app/v1",
  apiKey: process.env.LLM_ROUTER_API_KEY,
});

async function main() {
  const response = await client.chat.completions.create({
    model: "meta/llama-4-maverick",
    messages: [{ role: "user", content: "Hello!" }],
    gateway: {
      only: ["groq", "togetherai"], // Strict whitelist
    },
  });

  console.log(response.choices[0].message.content);
}
main();
In this example:
  • Restriction: Only groq and togetherai will be considered for routing and fallbacks.
  • Error on Mismatch: If neither of these providers is available (or if the requested model doesn’t exist on them), the request will fail immediately with an error. LLM Router will never attempt to route to an unlisted provider.
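The whitelist semantics can be sketched as a filter over the candidate providers, erroring when nothing survives. This is an illustrative sketch of the behavior described above; `applyOnly` and its error message are assumptions, not LLM Router internals.

```typescript
// Keep only whitelisted providers; an empty candidate set is a hard error.
function applyOnly(available: string[], only: string[]): string[] {
  const candidates = available.filter((p) => only.includes(p));
  if (candidates.length === 0) {
    throw new Error("No whitelisted provider can serve this request");
  }
  return candidates;
}

console.log(applyOnly(["groq", "openai", "togetherai"], ["groq", "togetherai"]));
// ["groq", "togetherai"] — openai is never considered
```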

Using only and order Together

When you provide both only and order, the only filter is applied first to establish the allowed security boundary. Then, order dictates the priority within that boundary. Essentially, the final routing path is the intersection of the two arrays.
TypeScript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.llmrouter.app/v1",
  apiKey: process.env.LLM_ROUTER_API_KEY,
});

async function main() {
  const response = await client.chat.completions.create({
    model: "anthropic/claude-opus-4.6",
    messages: [{ role: "user", content: "Analyze this data." }],
    gateway: {
      only: ["anthropic", "vertex"],
      order: ["vertex", "bedrock", "anthropic"],
    },
  });

  console.log(response.choices[0].message.content);
}
main();
Resulting Behavior: The final fallback order will be vertex → anthropic. (Note: bedrock was in the order list, but because it was not in the only whitelist, it is completely ignored.)
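The intersection rule described above reduces to filtering the order array through the only whitelist while preserving its sequence. A minimal sketch (the function name is an assumption for illustration):

```typescript
// Effective fallback path = `order`, restricted to the `only` whitelist,
// with the priority sequence of `order` preserved.
function effectiveOrder(order: string[], only: string[]): string[] {
  return order.filter((p) => only.includes(p));
}

console.log(effectiveOrder(["vertex", "bedrock", "anthropic"], ["anthropic", "vertex"]));
// ["vertex", "anthropic"] — bedrock is dropped by the whitelist
```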

Quick Reference

Option | Type     | Description
------ | -------- | -----------
order  | string[] | Provider slugs in the exact order they should be prioritized for fallbacks.
only   | string[] | A strict whitelist. Routing is restricted exclusively to these provider slugs.