Eigent & LLM Router
Eigent is an open-source “coworker” agent that runs directly on your desktop. It uses a multi-agent architecture to perform browser automation, terminal automation, and file system operations, acting much like a human worker operating in a real desktop environment.
Because Eigent interacts heavily with your local machine, routing its traffic through LLM Router is highly recommended. The router redacts any local .env contents or API keys the agent accidentally reads before they reach external LLM providers, and it compresses bloated terminal and browser outputs, substantially reducing input-token usage.
Important: When using LLM Router with Eigent, you must set the Custom API Host to https://api.llmrouter.app/v1 to ensure API compatibility.
Step 1: Installing Eigent
Choose the installation method that best suits your needs:
1. Download Eigent
Visit eigent.ai and download the latest version for your platform (macOS 11+ or Windows).
2. Install the Application
- macOS: Open the downloaded .dmg file and drag Eigent into your Applications folder.
- Windows: Run the downloaded .exe installer and follow the on-screen instructions.
3. Launch Eigent
Open the application to get started.
For developers who want to run Eigent locally from source code, follow these steps instead.
Prerequisites:
- Node.js >= 18.0.0
- Python >= 3.12
- Docker (recommended) or PostgreSQL 15
1. Clone the Repository
```bash
git clone https://github.com/eigent-ai/eigent.git
cd eigent
```
2. Start Backend Services
```bash
cd server
cp .env.example .env
docker compose up -d
```
3. Start Frontend Service
Step 2: Configuring LLM Router in Eigent
Eigent allows you to configure custom API endpoints. We will set this up to point directly to your LLM Router instance.
1. Access Application Settings
Launch Eigent and navigate to the Home Page, then click on the Settings tab (usually a gear icon).
2. Locate Model Configuration
In the Settings menu, find and select the Models section. Scroll down to the Custom Model area and look for the configuration card (often labeled as Custom OpenAI Config or similar).
3. Enter Configuration Details
Click on the custom configuration card and fill in the following information to connect to LLM Router:
- API Key: Enter your LLM Router API Key (obtained from the LLM Router Dashboard). It usually starts with sk-router-.
- API Host / Base URL: Enter https://api.llmrouter.app/v1 (make sure not to include /chat/completions at the end).
- Model Type: Enter the specific model you want Eigent to use, prefixed with the provider slug.
- Example 1: anthropic/claude-3-5-sonnet (best for complex desktop automation)
- Example 2: openai/gpt-4o
Click Save to apply your changes.
4. Set as Default
Once saved, click the “Set as Default” button on the configuration card to ensure Eigent routes all its agentic tasks through LLM Router.
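You can sanity-check the same configuration outside Eigent before relying on it. Below is a minimal Python sketch that builds an OpenAI-style chat completion request against the router; it assumes the endpoint is OpenAI-compatible, and the API key shown is a placeholder, not a real credential:

```python
import json
import urllib.request

BASE_URL = "https://api.llmrouter.app/v1"  # no /chat/completions suffix here
API_KEY = "sk-router-XXXX"                 # placeholder; use your own key

def build_chat_request(messages, model="anthropic/claude-3-5-sonnet"):
    """Build an OpenAI-style chat completion request for the router."""
    payload = {"model": model, "messages": messages}
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request([{"role": "user", "content": "Say hello"}])
# Sending the request requires a valid key and network access:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Note that the model name keeps the provider-prefixed form (anthropic/..., openai/...) exactly as entered in the Eigent settings card.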
Step 3: The LLM Router Advantage
With the configuration complete, your Eigent agents are now powered by LLM Router.
Because Eigent executes tasks autonomously on your machine, the safety and cost-saving layers of LLM Router become critical:
- Zero-Trust PII Redaction: If an Eigent agent reads a local config file containing your AWS keys or Stripe tokens while trying to debug a project, LLM Router will automatically mask them as [TOKEN_REDACTED] before sending the context to Claude or GPT-4o.
- Context Pruning: Agents often “loop” and read the same files multiple times. LLM Router will intelligently compress the middle of these repetitive file reads, saving you thousands of tokens per task.
- Provider Resilience: If Anthropic goes down while Eigent is in the middle of a complex multi-step browser automation task, LLM Router will seamlessly fall back to OpenAI, preventing the agent from crashing and losing its progress.
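To make the redaction and pruning behaviors concrete, here is an illustrative Python sketch of the idea; this is not LLM Router's actual implementation, and the regex patterns are deliberately simplistic stand-ins for real secret detection:

```python
import re

# Illustrative patterns only; real secret detection is far more thorough.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),  # OpenAI-style API keys
    re.compile(r"AKIA[0-9A-Z]{16}"),     # AWS access key IDs
]

def redact(text):
    """Mask anything that looks like a credential."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[TOKEN_REDACTED]", text)
    return text

def prune(messages):
    """Drop exact-duplicate message bodies, keeping the first occurrence."""
    seen, kept = set(), []
    for msg in messages:
        if msg["content"] in seen:
            continue  # agent re-read the same file; skip the repeat
        seen.add(msg["content"])
        kept.append(msg)
    return kept

# An agent that read the same config file twice:
messages = [
    {"role": "user", "content": "config: AWS_KEY=AKIAABCDEFGHIJKLMNOP"},
    {"role": "user", "content": "config: AWS_KEY=AKIAABCDEFGHIJKLMNOP"},
]
cleaned = [{**m, "content": redact(m["content"])} for m in prune(messages)]
# cleaned is a single message with the key masked
```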
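The provider-fallback behavior can likewise be sketched as a simple try-in-order loop; the provider functions below are stubs invented for illustration, not the router's real client code:

```python
def call_with_fallback(providers, prompt):
    """Try each (name, call) pair in order; return the first success."""
    last_error = None
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # in practice: only retriable network/5xx errors
            last_error = exc
    raise RuntimeError("all providers failed") from last_error

# Stub providers standing in for real API calls:
def flaky_anthropic(prompt):
    raise ConnectionError("provider outage")

def stub_openai(prompt):
    return f"echo: {prompt}"

provider, answer = call_with_fallback(
    [("anthropic", flaky_anthropic), ("openai", stub_openai)], "hi"
)
# the first provider fails, so the call lands on the second
```

Because the fallback happens inside the router, the Eigent agent's multi-step task keeps its in-flight context instead of crashing.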