As your AI application scales, you need to teach the model highly specific workflows, such as "How to write a database migration for our stack" or "How to draft a compliant legal contract in our jurisdiction." Dumping all of these instructions into a single system prompt quickly blows up your context window, increases costs, and hurts performance. LLM Router Skills solve this problem elegantly. A Skill is a lightweight, modular folder that contains a `SKILL.md` file with precise instructions, business rules, and references. LLM Router intelligently injects only the relevant skills into the prompt when they are needed.
## What is a Skill?
A Skill is simply a directory that contains, at minimum, a `SKILL.md` file (and optionally other reference files).
- The `SKILL.md` file includes step-by-step instructions the AI must follow.
- It can reference additional files inside the same folder (API docs, code examples, templates, etc.).
- Instead of building complex RAG pipelines, you just install Skills — and let the router handle context injection.
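Concretely, a skill folder might look like the following sketch. The skill name and the referenced files here are hypothetical; only the `SKILL.md` file is required:

```
stripe_api/
├── SKILL.md         # step-by-step instructions and business rules (required)
├── webhooks.md      # additional reference notes SKILL.md can point to
└── examples/
    └── checkout.md  # optional code examples or templates
```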
## How to Install Skills
You have two easy ways to add Skills:

1. **Browse the Skills Catalog**: Visit skills.sh or skillsmp.com to explore and install community or official skills with one click or command.
2. **Add a Custom Skill**: Provide the GitHub repository URL of your own skill folder. LLM Router will pull it directly.
Each installed skill is identified by a Skill ID (e.g., `stripe_api`, `react_guidelines`).
## Smart Skill Injection
When you send a request, you can pass a list of Skill IDs. LLM Router offers two modes:

### 1. Automatic Relevance (Recommended)

Set `enableAutoSearch: true`. The router analyzes the user's prompt, reads the `SKILL.md` files (and any referenced files inside the skill), and injects only the skills that are actually relevant. This keeps your context window lean and dramatically reduces cost and latency.
### 2. Force All Skills

Set `enableAutoSearch: false` (the default). All listed `SKILL.md` files are injected regardless of the prompt. Use this when you need guaranteed access to certain instructions.
## Using Skills via API

### Smart Skill Injection (`enableAutoSearch`)
If you pass 10 different `skillIds` in your request, injecting all 10 `SKILL.md` files into the prompt every time will waste thousands of input tokens.
You can optimize this using the enableAutoSearch flag:
- `enableAutoSearch: true` (Recommended): LLM Router acts as an intelligent librarian. It reads the user's prompt, searches through the `SKILL.md` files (and their referenced pages) defined in your `skillIds` array, and injects only the skills that are highly relevant to the current question. For example, if a request lists `stripe_api`, `node_best_practices`, and `marketing_copy`, and the user asks for a billing webhook, the router would dynamically inject the `stripe_api` and `node_best_practices` instructions but completely ignore `marketing_copy`.
- `enableAutoSearch: false` (Default): LLM Router injects the content of all the `SKILL.md` files listed in `skillIds` into the request, regardless of the user's prompt. Use this if you want absolute certainty that the model has access to those specific instructions at all times.
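As a sketch, a request that attaches skills might be assembled like this. The `skillIds` and `enableAutoSearch` fields are the ones documented here; the model name and the chat-style `messages` array are assumptions based on a typical OpenAI-compatible gateway, so check your LLM Router endpoint docs for the exact shape:

```python
import json


def build_request(prompt: str, skill_ids: list[str], auto_search: bool = True) -> dict:
    """Assemble a chat request payload that attaches installed skills."""
    return {
        "model": "gpt-4o",  # assumed model name for illustration
        "messages": [{"role": "user", "content": prompt}],
        "skills": {
            "skillIds": skill_ids,              # IDs installed via the dashboard
            "enableAutoSearch": auto_search,    # let the router pick relevant skills
        },
    }


payload = build_request(
    "Write a billing webhook handler for our Stripe integration.",
    ["stripe_api", "node_best_practices", "marketing_copy"],
)
print(json.dumps(payload, indent=2))
```

With `enableAutoSearch` left at `True`, only the skills relevant to the billing-webhook prompt would be injected; passing `auto_search=False` would force all three into the context.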
## Configuration Properties

### The `skills` Object
| Property | Type | Default | Description |
|---|---|---|---|
| `skillIds` | `string[]` | Required | An array of Skill IDs you have installed via the LLM Router dashboard. |
| `enableAutoSearch` | `boolean` | `false` | If `true`, the Gateway intelligently analyzes the prompt and injects only the `SKILL.md` instructions from your list that are highly relevant to the user's query. |
**Context Optimization Synergy:** Using `enableAutoSearch: true` alongside our Chat Optimization features ensures your prompts remain incredibly lean, keeping latency low and drastically reducing your upstream API costs.