Documentation Index

Fetch the complete documentation index at:
https://docs.llmrouter.app/llms.txt
Use this file to discover all available models, tags, and features before exploring further.

LiteLLM Integration

LiteLLM is one of the most popular open-source libraries for calling LLMs. It provides a unified OpenAI-compatible interface for 100+ models and providers. Since LLM Router is fully OpenAI-compatible, you can use it seamlessly with LiteLLM to get intelligent routing, Skills, Zero Data Retention, and automatic cost optimization.

Installation

pip install litellm

Basic Configuration

You can use LLM Router with LiteLLM in two ways:

Method 1: Pass Parameters Per Request

import litellm

response = litellm.completion(
    model="anthropic/claude-opus-4.6",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
    base_url="https://api.llmrouter.app/v1",
    api_key="sk_llmr_your_key_here",   # Your LLM Router API key
)
print(response.choices[0].message.content)
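If you call LLM Router from many places, you can bind the shared settings once with functools.partial instead of repeating base_url and api_key in every call. A minimal sketch of the pattern; the stand-in completion function below mirrors litellm.completion's keyword shape so the snippet runs without credentials (in real code, pass litellm.completion itself):

```python
import functools

def completion(model, messages, base_url=None, api_key=None):
    """Stand-in with litellm.completion's keyword shape (illustrative only)."""
    return {"model": model, "base_url": base_url}

# In real code: functools.partial(litellm.completion, base_url=..., api_key=...)
router_completion = functools.partial(
    completion,
    base_url="https://api.llmrouter.app/v1",
    api_key="sk_llmr_your_key_here",
)

resp = router_completion(
    model="anthropic/claude-opus-4.6",
    messages=[{"role": "user", "content": "Hello"}],
)
```

Every call through router_completion now targets LLM Router without restating the connection details.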

Method 2: Set Environment Variables (Global)

export LITELLM_BASE_URL="https://api.llmrouter.app/v1"
export LITELLM_API_KEY="sk_llmr_your_key_here"
Then use LiteLLM normally:
import litellm

response = litellm.completion(
    model="anthropic/claude-sonnet-4.6",
    messages=[{"role": "user", "content": "Explain how Skills work in LLM Router"}]
)

Using Advanced LLM Router Features

Pass LLM Router-specific options through LiteLLM's extra_body parameter, which forwards extra fields to the API:

response = litellm.completion(
    model="anthropic/claude-opus-4.6",
    messages=messages,
    base_url="https://api.llmrouter.app/v1",
    api_key="sk_llmr_your_key_here",
    extra_body={
        "gateway": {
            "zdr": True,                          # Zero Data Retention
            "skills": {
                "skillIds": ["sk_company-style", "sk_api-guidelines"],
                "enableAutoSearch": True
            },
            "chatHistoryCompression": {
                "enabled": True,
                "score": 0.7
            }
        }
    }
)
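For reference, the gateway options above travel as one nested object in the request JSON; here is a minimal sketch of the resulting body (the exact field layout is assumed to match the snippet above, merged alongside model and messages as with other OpenAI-compatible extras):

```python
import json

# Illustrative request body; assumes the gateway object merges into the
# top-level JSON payload alongside model and messages.
payload = {
    "model": "anthropic/claude-opus-4.6",
    "messages": [{"role": "user", "content": "Hello"}],
    "gateway": {
        "zdr": True,
        "skills": {
            "skillIds": ["sk_company-style", "sk_api-guidelines"],
            "enableAutoSearch": True,
        },
        "chatHistoryCompression": {"enabled": True, "score": 0.7},
    },
}

body = json.dumps(payload)
```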
All LLM Router model IDs can be passed as the model argument, for example:

models = [
    "anthropic/claude-opus-4.6",
    "anthropic/claude-sonnet-4.6",
    "openai/gpt-5.4",
    "google/gemini-3.1-pro",
    "xai/grok-4.20",
    "deepseek/deepseek-v3.2"
]
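A list like this also makes it easy to fall back to the next model when one errors. A minimal, illustrative pattern (the complete_with_fallbacks helper below is not a LiteLLM API, just a loop you can adapt; the completion callable is injected so the sketch runs without network access):

```python
def complete_with_fallbacks(models, messages, complete):
    """Try each model in order; return the first successful response.

    `complete` is any callable with litellm.completion's signature,
    e.g. litellm.completion with base_url and api_key supplied.
    """
    last_error = None
    for model in models:
        try:
            return complete(model=model, messages=messages)
        except Exception as exc:  # LiteLLM raises OpenAI-style exceptions
            last_error = exc
    raise last_error
```

For production workloads, LiteLLM's Router class provides built-in retries and fallbacks along these lines.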

Best Practice: Use Dashboard Configuration

Instead of passing options in every request, we strongly recommend configuring defaults in the LLM Router Dashboard:
  • Set default tags (coding, reasoning, ui design, etc.)
  • Enable relevant Skills
  • Turn on Zero Data Retention (ZDR)
  • Configure context optimization rules
This way, your LiteLLM calls automatically benefit from intelligent routing and optimization without extra code.

Async Usage Example

import litellm
import asyncio

async def main():
    response = await litellm.acompletion(
        model="anthropic/claude-sonnet-4.6",
        messages=[{"role": "user", "content": "Write a FastAPI endpoint"}],
        base_url="https://api.llmrouter.app/v1",
        api_key="sk_llmr_your_key_here",
    )
    print(response.choices[0].message.content)

asyncio.run(main())
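acompletion pays off most when you fan several prompts out concurrently. A sketch of the pattern with asyncio.gather; the fan_out helper is illustrative, not a LiteLLM API, and the coroutine is injected so the shape is clear without network access (in real code, pass litellm.acompletion with base_url and api_key set as above):

```python
import asyncio

async def fan_out(prompts, acomplete):
    """Run one acompletion-style call per prompt, concurrently.

    `acomplete` is any coroutine with litellm.acompletion's keyword shape.
    """
    tasks = [
        acomplete(
            model="anthropic/claude-sonnet-4.6",
            messages=[{"role": "user", "content": p}],
        )
        for p in prompts
    ]
    return await asyncio.gather(*tasks)
```

asyncio.gather preserves input order, so results line up with the prompts you passed in.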

Pro Tip: LiteLLM + LLM Router is a strong combination for production applications: you get LiteLLM's fallback, logging, and observability features on top of LLM Router's smart routing and cost-saving capabilities.