
OpenAI Codex CLI & LLM Router

The OpenAI Codex CLI is an agentic coding tool that runs in your terminal. By default, it is locked to OpenAI’s ecosystem. By configuring Codex to use LLM Router as its base provider, you can:
  • Unlock Multi-Provider Support: Use Claude 3.5 Sonnet or DeepSeek directly inside the Codex CLI.
  • Save Costs: Automatically compress large file reads and terminal outputs using Middle-Out Compression.
  • Enhance Security: Redact local .env variables and AWS keys before they ever reach the AI provider.

Configure OpenAI Codex

You can easily point Codex to LLM Router by modifying its persistent configuration file.

1. Install OpenAI Codex CLI

If you haven’t already, follow the installation instructions on the OpenAI Codex repository to install the CLI tool.

2. Configure Environment Variables

Set your LLM Router API key in your shell configuration file (e.g., ~/.zshrc or ~/.bashrc):
export LLM_ROUTER_API_KEY="sk-router-your-api-key"
After adding this, reload your shell configuration:
source ~/.zshrc  # or: source ~/.bashrc
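If Codex later fails to authenticate, the usual cause is that the key never made it into the environment. A quick sanity check (the key below is the placeholder value from above, not a real key):

```shell
# Placeholder key from step 2; in practice this comes from your shell rc file.
export LLM_ROUTER_API_KEY="sk-router-your-api-key"

# Fail fast if the variable is missing or empty (the :? expansion aborts with an error).
: "${LLM_ROUTER_API_KEY:?LLM_ROUTER_API_KEY is not set}"

# Print only a masked prefix so the full key never lands in scrollback or logs.
echo "Key loaded: ${LLM_ROUTER_API_KEY:0:9}..."
```

This prints `Key loaded: sk-router...` when the variable is set, and aborts with an error message when it is not.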

3. Set up the Codex Config File

Open (or create) the Codex configuration file at ~/.codex/config.toml and add LLM Router as a provider:
~/.codex/config.toml
[model_providers.llmrouter]
name = "LLM Router"
base_url = "https://api.llmrouter.app/v1"
env_key = "LLM_ROUTER_API_KEY"
wire_api = "responses"

[profiles.llmrouter]
model_provider = "llmrouter"
model = "openai/gpt-4o" # You can use any provider here
What this configuration does:
  • Creates a custom provider named llmrouter that points to the LLM Router API gateway.
  • Tells Codex to authenticate using your LLM_ROUTER_API_KEY environment variable.
  • Creates a default profile that routes your requests to gpt-4o (with LLM Router handling the optimization in the background).
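If you prefer to script this step, the same two blocks can be appended from the shell. This sketch assumes the default ~/.codex location and creates the directory and file if they do not exist:

```shell
# Create the Codex config directory on a fresh install.
mkdir -p ~/.codex

# Append the provider and profile blocks; the quoted 'EOF' disables variable expansion
# so the TOML is written verbatim.
cat >> ~/.codex/config.toml <<'EOF'
[model_providers.llmrouter]
name = "LLM Router"
base_url = "https://api.llmrouter.app/v1"
env_key = "LLM_ROUTER_API_KEY"
wire_api = "responses"

[profiles.llmrouter]
model_provider = "llmrouter"
model = "openai/gpt-4o"
EOF
```

Note that `>>` appends, so running this twice leaves a duplicate block; use it once on a fresh config, or edit the file by hand.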

4. Run Codex

Start Codex using your new LLM Router profile:
codex --profile llmrouter
All requests, file reads, and terminal outputs are now routed through LLM Router, which compresses and redacts them before they reach the upstream provider.

Advanced Configuration

Use Non-OpenAI Models

Because LLM Router translates all payloads to the OpenAI standard, you can use the Codex CLI with models from any provider LLM Router supports. Just update the model field in your config using the LLM Router {provider}/{model} slug format:
~/.codex/config.toml
[profiles.llmrouter]
model_provider = "llmrouter"
model = "anthropic/claude-3-5-sonnet"
Note: When using non-OpenAI models (such as Claude or Gemini) through the Codex CLI, the CLI may display warnings about “model metadata not being found.” These warnings are safe to ignore; LLM Router handles the translation and routing on the backend.
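The slug is simply the provider name and the provider's model id joined by a slash. As an illustration, splitting one with standard shell parameter expansion:

```shell
# Example slug in the LLM Router {provider}/{model} format.
slug="anthropic/claude-3-5-sonnet"

# Everything before the first "/" is the provider; the remainder is the model id.
provider="${slug%%/*}"
model="${slug#*/}"

echo "provider=$provider model=$model"
```

This prints `provider=anthropic model=claude-3-5-sonnet`.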

Define Multiple Routing Profiles

You can set up multiple profiles in your config.toml to quickly switch between models depending on the task:
~/.codex/config.toml
[model_providers.llmrouter]
name = "LLM Router"
base_url = "https://api.llmrouter.app/v1"
env_key = "LLM_ROUTER_API_KEY"
wire_api = "responses"

# Default balanced profile
[profiles.llmrouter]
model_provider = "llmrouter"
model = "openai/gpt-4o"

# Fast, cheap profile for simple refactoring
[profiles.fast]
model_provider = "llmrouter"
model = "groq/llama-3.1-8b-instant"

# Heavy reasoning profile for complex bugs
[profiles.reasoning]
model_provider = "llmrouter"
model = "deepseek/deepseek-reasoner"

# Best coding profile
[profiles.claude]
model_provider = "llmrouter"
model = "anthropic/claude-3-5-sonnet"
Now you can switch models directly from your terminal to match the task at hand:
codex --profile fast
codex --profile claude
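To save a few keystrokes, you can wrap the profile switches in shell aliases in your ~/.zshrc or ~/.bashrc. The alias names below are just a suggestion; they must match the profile names in your config.toml:

```shell
# One alias per routing profile defined in config.toml.
alias codex-fast='codex --profile fast'
alias codex-reasoning='codex --profile reasoning'
alias codex-claude='codex --profile claude'
```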