# OpenAI Codex CLI & LLM Router
The OpenAI Codex CLI is an agentic coding tool that runs in your terminal. By default, it is locked to OpenAI’s ecosystem. By configuring Codex to use LLM Router as its base provider, you can:

- Unlock multi-provider support: use Claude 3.5 Sonnet or DeepSeek directly inside the Codex CLI.
- Save costs: automatically compress large file reads and terminal outputs using Middle-Out Compression.
- Enhance security: redact local `.env` variables and AWS keys before they ever reach the AI provider.
## Configure OpenAI Codex
You can easily point Codex to LLM Router by modifying its persistent configuration file.

### 1. Install OpenAI Codex CLI
If you haven’t already, follow the installation instructions on the OpenAI Codex repository to install the CLI tool.

### 2. Configure Environment Variables
Set your LLM Router API key in your shell configuration file (e.g., `~/.zshrc` or `~/.bashrc`):
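For example (the key value below is a placeholder, not a real key):

```shell
# In ~/.zshrc or ~/.bashrc; replace the placeholder with your actual key.
export LLM_ROUTER_API_KEY="your-api-key-here"
```

Reload your shell (or open a new terminal) so the variable is available to Codex.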
### 3. Set up the Codex Config File
Open the Codex configuration file located at `~/.codex/config.toml` and add LLM Router as a provider:
~/.codex/config.toml
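A minimal sketch of this configuration, based on the Codex CLI’s `model_providers`/`profiles` config format (the base URL below is a placeholder; substitute LLM Router’s actual gateway endpoint):

```toml
# Custom provider pointing at the LLM Router gateway.
[model_providers.llmrouter]
name = "LLM Router"
base_url = "https://llm-router.example.com/v1"  # placeholder; use your gateway URL
env_key = "LLM_ROUTER_API_KEY"                  # reads the key from your environment

# Default profile routing requests through LLM Router.
[profiles.llmrouter]
model_provider = "llmrouter"
model = "gpt-4o"
```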
This configuration does the following:

- Creates a custom provider named `llmrouter` that points to the LLM Router API gateway.
- Tells Codex to authenticate using your `LLM_ROUTER_API_KEY` environment variable.
- Creates a default profile that routes your requests to `gpt-4o` (with LLM Router handling the optimization in the background).
### 4. Run Codex
Start Codex using your new LLM Router profile, e.g. `codex --profile llmrouter` (the profile name depends on your config).

## Advanced Configuration
### Use Non-OpenAI Models
Because LLM Router translates all payloads to the OpenAI standard, you can use the Codex CLI with any model. Just update the `model` field in your config using the standard LLM Router `{provider}/{model}` slug format:
~/.codex/config.toml
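An illustrative profile using the slug format (the exact model slug here is an example):

```toml
[profiles.llmrouter]
model_provider = "llmrouter"
# {provider}/{model} slug format; this slug is an example.
model = "anthropic/claude-3.5-sonnet"
```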
> **Note:** When using non-OpenAI models (like Claude or Gemini) through the Codex CLI, the CLI might display warnings about “model metadata not being found.” These warnings are safe to ignore; LLM Router handles the translation and routing on the backend.
### Define Multiple Routing Profiles
You can set up multiple profiles in your `config.toml` to quickly switch between models depending on the task:
~/.codex/config.toml
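A sketch of a multi-profile setup (the profile names and model slugs below are illustrative):

```toml
# Lightweight profile for quick edits.
[profiles.fast]
model_provider = "llmrouter"
model = "openai/gpt-4o-mini"

# Heavier profile for complex refactors.
[profiles.deep]
model_provider = "llmrouter"
model = "anthropic/claude-3.5-sonnet"
```

Switch between them at launch, e.g. `codex --profile fast` or `codex --profile deep`.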