POST /v1/messages
Create message (Anthropic)
curl --request POST \
  --url https://api.llmrouter.app/v1/messages \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "<string>",
  "messages": [
    {
      "role": "user",
      "content": "<string>",
      "name": "<string>"
    }
  ],
  "temperature": 1,
  "max_tokens": 123,
  "stream": false,
  "gateway": {
    "zdr": true,
    "order": [
      "<string>"
    ],
    "only": [
      "<string>"
    ],
    "planningTriggerScore": 0.6,
    "imageGenerationModel": "<string>",
    "tags": [
      {
        "name": "<string>",
        "description": "<string>",
        "models": [
          "<string>"
        ]
      }
    ],
    "chatHistoryCompression": {
      "enabled": true,
      "score": 0.6
    },
    "mediaOptimization": false,
    "toolOptimization": {
      "enabled": true,
      "acceptScore": 0.5,
      "alwaysInclude": [
        "<string>"
      ]
    },
    "redact": {
      "email": true,
      "phone": true,
      "ip": true,
      "uuid": true,
      "token": true,
      "credit_card": true,
      "iban": true,
      "ssn": true,
      "mac": true,
      "custom": [
        "<string>"
      ]
    }
  }
}
'

Example response:

{
  "id": "<string>",
  "type": "message",
  "role": "assistant",
  "model": "<string>",
  "content": [
    {
      "type": "text",
      "text": "<string>"
    }
  ],
  "stop_reason": "<string>",
  "stop_sequence": "<string>",
  "usage": {
    "input_tokens": 123,
    "output_tokens": 123,
    "planning_usage_details": {
      "prompt_tokens": 123,
      "completion_tokens": 123,
      "total_tokens": 123,
      "cost": 123
    }
  }
}

Authorizations

- Authorization (string, header, required): LLM Router API Key (e.g., sk-router-...)

Body

application/json

Accepts any additional properties supported by the upstream model (e.g., tools, top_p, response_format).

- model (string, required): ID of the model to use (e.g., anthropic/claude-3-5-sonnet)
- messages (object[], required): Conversation turns sent to the model.
- temperature (number, default: 1)
- max_tokens (integer)
- stream (boolean, default: false)
- gateway (object): Custom LLM Router configurations injected into the request.
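As a sketch of how the body fields above fit together (not official client code), the request payload can be assembled in Python before sending it to the endpoint. `build_message_request` is a hypothetical helper; the field names mirror the schema, and the `gateway` value here is illustrative:

```python
import json

def build_message_request(model, user_text, max_tokens=1024, gateway=None):
    """Build a request body for POST /v1/messages (hypothetical helper)."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": user_text}],
        "max_tokens": max_tokens,
        "stream": False,
    }
    if gateway is not None:
        # Optional router-specific settings (redaction, routing order, etc.)
        body["gateway"] = gateway
    return body

payload = build_message_request(
    "anthropic/claude-3-5-sonnet",
    "Hello!",
    gateway={"redact": {"email": True}},
)
print(json.dumps(payload, indent=2))
```

The resulting JSON can then be sent with any HTTP client, passing the API key in the `Authorization: Bearer` header as shown in the curl example above.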

Response

Successful response

- id (string)
- type (string, default: message)
- role (string, default: assistant)
- model (string)
- content (object[])
- stop_reason (string)
- stop_sequence (string)
- usage (object)
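A response shaped like the schema above can be consumed as in this minimal Python sketch; the sample values are illustrative, not actual API output:

```python
import json

# Sample response matching the schema above (values are illustrative).
raw = """{
  "id": "msg_123",
  "type": "message",
  "role": "assistant",
  "model": "anthropic/claude-3-5-sonnet",
  "content": [{"type": "text", "text": "Hello!"}],
  "stop_reason": "end_turn",
  "stop_sequence": null,
  "usage": {"input_tokens": 10, "output_tokens": 5}
}"""

resp = json.loads(raw)

# Concatenate every text block in content (content is an array of blocks).
text = "".join(b["text"] for b in resp["content"] if b["type"] == "text")

# Total token count from the usage object.
total_tokens = resp["usage"]["input_tokens"] + resp["usage"]["output_tokens"]

print(text, total_tokens)
```

Iterating over `content` rather than indexing `content[0]` keeps the code correct if the model returns multiple blocks.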