OpenRouter

Status: ✅ Supported

OpenRouter provides unified access to multiple AI models through a single API.

Supported Models

Anthropic

| Model ID | Name | Context |
|---|---|---|
| anthropic/claude-3.5-haiku | Claude Haiku 3.5 | 200K |
| anthropic/claude-haiku-4.5 | Claude Haiku 4.5 | 200K |
| anthropic/claude-opus-4 | Claude Opus 4 | 200K |
| anthropic/claude-opus-4.1 | Claude Opus 4.1 | 200K |
| anthropic/claude-opus-4.5 | Claude Opus 4.5 | 200K |
| anthropic/claude-opus-4.6 | Claude Opus 4.6 | 1M |
| anthropic/claude-opus-4.7 | Claude Opus 4.7 | 1M |
| anthropic/claude-3.7-sonnet | Claude Sonnet 3.7 | 200K |
| anthropic/claude-sonnet-4 | Claude Sonnet 4 | 200K |
| anthropic/claude-sonnet-4.5 | Claude Sonnet 4.5 | 1M |
| anthropic/claude-sonnet-4.6 | Claude Sonnet 4.6 | 1M |

Arcee AI

| Model ID | Name | Context |
|---|---|---|
| arcee-ai/trinity-large-preview:free | Trinity Large Preview | 131K |
| arcee-ai/trinity-large-thinking | Trinity Large Thinking | 262K |

Black Forest Labs

| Model ID | Name | Context |
|---|---|---|
| black-forest-labs/flux.2-flex | FLUX.2 Flex | 67K |
| black-forest-labs/flux.2-klein-4b | FLUX.2 Klein 4B | 40K |
| black-forest-labs/flux.2-max | FLUX.2 Max | 46K |
| black-forest-labs/flux.2-pro | FLUX.2 Pro | 46K |

ByteDance Seed

| Model ID | Name | Context |
|---|---|---|
| bytedance-seed/seedream-4.5 | Seedream 4.5 | 4K |

Cognitive Computations

| Model ID | Name | Context |
|---|---|---|
| cognitivecomputations/dolphin-mistral-24b-venice-edition:free | Uncensored (free) | 32K |

DeepSeek

| Model ID | Name | Context |
|---|---|---|
| deepseek/deepseek-r1-distill-llama-70b | DeepSeek R1 Distill Llama 70B | 8K |
| deepseek/deepseek-chat-v3-0324 | DeepSeek V3 0324 | 16K |
| deepseek/deepseek-v3.1-terminus | DeepSeek V3.1 Terminus | 131K |
| deepseek/deepseek-v3.1-terminus:exacto | DeepSeek V3.1 Terminus (exacto) | 131K |
| deepseek/deepseek-v3.2 | DeepSeek V3.2 | 163K |
| deepseek/deepseek-v3.2-speciale | DeepSeek V3.2 Speciale | 163K |
| deepseek/deepseek-chat-v3.1 | DeepSeek-V3.1 | 163K |
| deepseek/deepseek-r1 | DeepSeek: R1 | 64K |

Google

| Model ID | Name | Context |
|---|---|---|
| google/gemini-2.0-flash-001 | Gemini 2.0 Flash | 1M |
| google/gemini-2.5-flash | Gemini 2.5 Flash | 1M |
| google/gemini-2.5-flash-lite | Gemini 2.5 Flash Lite | 1M |
| google/gemini-2.5-flash-lite-preview-09-2025 | Gemini 2.5 Flash Lite Preview 09-25 | 1M |
| google/gemini-2.5-flash-preview-09-2025 | Gemini 2.5 Flash Preview 09-25 | 1M |
| google/gemini-2.5-pro | Gemini 2.5 Pro | 1M |
| google/gemini-2.5-pro-preview-05-06 | Gemini 2.5 Pro Preview 05-06 | 1M |
| google/gemini-2.5-pro-preview-06-05 | Gemini 2.5 Pro Preview 06-05 | 1M |
| google/gemini-3-flash-preview | Gemini 3 Flash Preview | 1M |
| google/gemini-3-pro-preview | Gemini 3 Pro Preview | 1M |
| google/gemini-3.1-flash-lite-preview | Gemini 3.1 Flash Lite Preview | 1M |
| google/gemini-3.1-pro-preview | Gemini 3.1 Pro Preview | 1M |
| google/gemini-3.1-pro-preview-customtools | Gemini 3.1 Pro Preview Custom Tools | 1M |
| google/gemma-2-9b-it | Gemma 2 9B | 8K |
| google/gemma-3-12b-it | Gemma 3 12B | 131K |
| google/gemma-3-12b-it:free | Gemma 3 12B (free) | 32K |
| google/gemma-3-27b-it | Gemma 3 27B | 96K |
| google/gemma-3-27b-it:free | Gemma 3 27B (free) | 131K |
| google/gemma-3-4b-it | Gemma 3 4B | 96K |
| google/gemma-3-4b-it:free | Gemma 3 4B (free) | 32K |
| google/gemma-3n-e2b-it:free | Gemma 3n 2B (free) | 8K |
| google/gemma-3n-e4b-it | Gemma 3n 4B | 32K |
| google/gemma-3n-e4b-it:free | Gemma 3n 4B (free) | 8K |
| google/gemma-4-26b-a4b-it | Gemma 4 26B A4B | 262K |
| google/gemma-4-26b-a4b-it:free | Gemma 4 26B A4B (free) | 262K |
| google/gemma-4-31b-it | Gemma 4 31B | 262K |
| google/gemma-4-31b-it:free | Gemma 4 31B (free) | 262K |

Inception

| Model ID | Name | Context |
|---|---|---|
| inception/mercury-2 | Mercury 2 | 128K |
| inception/mercury-edit-2 | Mercury Edit 2 | 128K |

Liquid

| Model ID | Name | Context |
|---|---|---|
| liquid/lfm-2.5-1.2b-instruct:free | LFM2.5-1.2B-Instruct (free) | 131K |
| liquid/lfm-2.5-1.2b-thinking:free | LFM2.5-1.2B-Thinking (free) | 131K |

Meta Llama

| Model ID | Name | Context |
|---|---|---|
| meta-llama/llama-3.2-11b-vision-instruct | Llama 3.2 11B Vision Instruct | 131K |
| meta-llama/llama-3.2-3b-instruct:free | Llama 3.2 3B Instruct (free) | 131K |
| meta-llama/llama-3.3-70b-instruct:free | Llama 3.3 70B Instruct (free) | 131K |

MiniMax

| Model ID | Name | Context |
|---|---|---|
| minimax/minimax-m1 | MiniMax M1 | 1M |
| minimax/minimax-m2 | MiniMax M2 | 196K |
| minimax/minimax-m2.1 | MiniMax M2.1 | 204K |
| minimax/minimax-m2.5 | MiniMax M2.5 | 204K |
| minimax/minimax-m2.5:free | MiniMax M2.5 (free) | 204K |
| minimax/minimax-m2.7 | MiniMax M2.7 | 204K |
| minimax/minimax-01 | MiniMax-01 | 1M |

Mistral AI

| Model ID | Name | Context |
|---|---|---|
| mistralai/codestral-2508 | Codestral 2508 | 256K |
| mistralai/devstral-2512 | Devstral 2 2512 | 262K |
| mistralai/devstral-medium-2507 | Devstral Medium | 131K |
| mistralai/devstral-small-2505 | Devstral Small | 128K |
| mistralai/devstral-small-2507 | Devstral Small 1.1 | 131K |
| mistralai/mistral-medium-3 | Mistral Medium 3 | 131K |
| mistralai/mistral-medium-3.1 | Mistral Medium 3.1 | 262K |
| mistralai/mistral-small-3.1-24b-instruct | Mistral Small 3.1 24B Instruct | 128K |
| mistralai/mistral-small-3.2-24b-instruct | Mistral Small 3.2 24B Instruct | 96K |
| mistralai/mistral-small-2603 | Mistral Small 4 | 262K |

Moonshot AI

| Model ID | Name | Context |
|---|---|---|
| moonshotai/kimi-k2 | Kimi K2 | 131K |
| moonshotai/kimi-k2-0905 | Kimi K2 Instruct 0905 | 262K |
| moonshotai/kimi-k2-0905:exacto | Kimi K2 Instruct 0905 (exacto) | 262K |
| moonshotai/kimi-k2-thinking | Kimi K2 Thinking | 262K |
| moonshotai/kimi-k2.5 | Kimi K2.5 | 262K |

Nous Research

| Model ID | Name | Context |
|---|---|---|
| nousresearch/hermes-3-llama-3.1-405b:free | Hermes 3 405B Instruct (free) | 131K |
| nousresearch/hermes-4-405b | Hermes 4 405B | 131K |
| nousresearch/hermes-4-70b | Hermes 4 70B | 131K |

NVIDIA

| Model ID | Name | Context |
|---|---|---|
| nvidia/nemotron-3-nano-30b-a3b:free | Nemotron 3 Nano 30B A3B (free) | 256K |
| nvidia/nemotron-3-super-120b-a12b | Nemotron 3 Super | 262K |
| nvidia/nemotron-3-super-120b-a12b:free | Nemotron 3 Super (free) | 262K |
| nvidia/nemotron-nano-12b-v2-vl:free | Nemotron Nano 12B 2 VL (free) | 128K |
| nvidia/nemotron-nano-9b-v2:free | Nemotron Nano 9B V2 (free) | 128K |
| nvidia/nemotron-nano-9b-v2 | nvidia-nemotron-nano-9b-v2 | 131K |

OpenAI

| Model ID | Name | Context |
|---|---|---|
| openai/gpt-oss-120b | GPT OSS 120B | 131K |
| openai/gpt-oss-120b:exacto | GPT OSS 120B (exacto) | 131K |
| openai/gpt-oss-20b | GPT OSS 20B | 131K |
| openai/gpt-oss-safeguard-20b | GPT OSS Safeguard 20B | 131K |
| openai/gpt-4.1 | GPT-4.1 | 1M |
| openai/gpt-4.1-mini | GPT-4.1 Mini | 1M |
| openai/gpt-4o-mini | GPT-4o-mini | 128K |
| openai/gpt-5 | GPT-5 | 400K |
| openai/gpt-5-chat | GPT-5 Chat (latest) | 400K |
| openai/gpt-5-codex | GPT-5 Codex | 400K |
| openai/gpt-5-image | GPT-5 Image | 400K |
| openai/gpt-5-mini | GPT-5 Mini | 400K |
| openai/gpt-5-nano | GPT-5 Nano | 400K |
| openai/gpt-5-pro | GPT-5 Pro | 400K |
| openai/gpt-5.1 | GPT-5.1 | 400K |
| openai/gpt-5.1-chat | GPT-5.1 Chat | 128K |
| openai/gpt-5.1-codex | GPT-5.1-Codex | 400K |
| openai/gpt-5.1-codex-max | GPT-5.1-Codex-Max | 400K |
| openai/gpt-5.1-codex-mini | GPT-5.1-Codex-Mini | 400K |
| openai/gpt-5.2 | GPT-5.2 | 400K |
| openai/gpt-5.2-chat | GPT-5.2 Chat | 128K |
| openai/gpt-5.2-pro | GPT-5.2 Pro | 400K |
| openai/gpt-5.2-codex | GPT-5.2-Codex | 400K |
| openai/gpt-5.3-codex | GPT-5.3-Codex | 400K |
| openai/gpt-5.4 | GPT-5.4 | 1M |
| openai/gpt-5.4-mini | GPT-5.4 Mini | 400K |
| openai/gpt-5.4-nano | GPT-5.4 Nano | 400K |
| openai/gpt-5.4-pro | GPT-5.4 Pro | 1M |
| openai/gpt-oss-120b:free | gpt-oss-120b (free) | 131K |
| openai/gpt-oss-20b:free | gpt-oss-20b (free) | 131K |
| openai/o4-mini | o4 Mini | 200K |

OpenRouter

| Model ID | Name | Context |
|---|---|---|
| openrouter/elephant-alpha | Elephant (free) | 262K |
| openrouter/free | Free Models Router | 200K |

Prime Intellect

| Model ID | Name | Context |
|---|---|---|
| prime-intellect/intellect-3 | Intellect 3 | 131K |

Qwen

| Model ID | Name | Context |
|---|---|---|
| qwen/qwen-2.5-coder-32b-instruct | Qwen2.5 Coder 32B Instruct | 32K |
| qwen/qwen2.5-vl-72b-instruct | Qwen2.5 VL 72B Instruct | 32K |
| qwen/qwen3-235b-a22b-07-25 | Qwen3 235B A22B Instruct 2507 | 262K |
| qwen/qwen3-235b-a22b-thinking-2507 | Qwen3 235B A22B Thinking 2507 | 262K |
| qwen/qwen3-30b-a3b-instruct-2507 | Qwen3 30B A3B Instruct 2507 | 262K |
| qwen/qwen3-30b-a3b-thinking-2507 | Qwen3 30B A3B Thinking 2507 | 262K |
| qwen/qwen3-coder | Qwen3 Coder | 262K |
| qwen/qwen3-coder:exacto | Qwen3 Coder (exacto) | 131K |
| qwen/qwen3-coder-30b-a3b-instruct | Qwen3 Coder 30B A3B Instruct | 160K |
| qwen/qwen3-coder-flash | Qwen3 Coder Flash | 128K |
| qwen/qwen3-max | Qwen3 Max | 262K |
| qwen/qwen3-next-80b-a3b-instruct | Qwen3 Next 80B A3B Instruct | 262K |
| qwen/qwen3-next-80b-a3b-thinking | Qwen3 Next 80B A3B Thinking | 262K |
| qwen/qwen3.5-397b-a17b | Qwen3.5 397B A17B | 262K |
| qwen/qwen3.5-plus-02-15 | Qwen3.5 Plus 2026-02-15 | 1M |
| qwen/qwen3.6-plus | Qwen3.6 Plus | 1M |
| qwen/qwen3.5-flash-02-23 | Qwen: Qwen3.5-Flash | 1M |

Sourceful

| Model ID | Name | Context |
|---|---|---|
| sourceful/riverflow-v2-fast-preview | Riverflow V2 Fast Preview | 8K |
| sourceful/riverflow-v2-max-preview | Riverflow V2 Max Preview | 8K |
| sourceful/riverflow-v2-standard-preview | Riverflow V2 Standard Preview | 8K |

StepFun

| Model ID | Name | Context |
|---|---|---|
| stepfun/step-3.5-flash | Step 3.5 Flash | 256K |

xAI

| Model ID | Name | Context |
|---|---|---|
| x-ai/grok-3 | Grok 3 | 131K |
| x-ai/grok-3-beta | Grok 3 Beta | 131K |
| x-ai/grok-3-mini | Grok 3 Mini | 131K |
| x-ai/grok-3-mini-beta | Grok 3 Mini Beta | 131K |
| x-ai/grok-4 | Grok 4 | 256K |
| x-ai/grok-4-fast | Grok 4 Fast | 2M |
| x-ai/grok-4.1-fast | Grok 4.1 Fast | 2M |
| x-ai/grok-4.20-beta | Grok 4.20 Beta | 2M |
| x-ai/grok-4.20-multi-agent-beta | Grok 4.20 Multi-Agent Beta | 2M |
| x-ai/grok-code-fast-1 | Grok Code Fast 1 | 256K |

Xiaomi

| Model ID | Name | Context |
|---|---|---|
| xiaomi/mimo-v2-flash | MiMo-V2-Flash | 262K |
| xiaomi/mimo-v2-omni | MiMo-V2-Omni | 262K |
| xiaomi/mimo-v2-pro | MiMo-V2-Pro | 1M |

Z.AI

| Model ID | Name | Context |
|---|---|---|
| z-ai/glm-4.5 | GLM 4.5 | 128K |
| z-ai/glm-4.5-air | GLM 4.5 Air | 128K |
| z-ai/glm-4.5-air:free | GLM 4.5 Air (free) | 128K |
| z-ai/glm-4.5v | GLM 4.5V | 64K |
| z-ai/glm-4.6 | GLM 4.6 | 200K |
| z-ai/glm-4.6:exacto | GLM 4.6 (exacto) | 200K |
| z-ai/glm-4.7 | GLM-4.7 | 204K |
| z-ai/glm-4.7-flash | GLM-4.7-Flash | 200K |
| z-ai/glm-5 | GLM-5 | 202K |
| z-ai/glm-5-turbo | GLM-5-Turbo | 202K |
| z-ai/glm-5.1 | GLM-5.1 | 202K |

Setup

1. Get API Key

  1. Visit OpenRouter
  2. Create or log in to your account
  3. Go to Keys → Create Key
  4. Copy the key

2. Add to Clawrium

clm provider add my-router --type openrouter

You will be prompted to enter your API key securely.
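Before relying on the key inside Clawrium, you can sanity-check it directly against OpenRouter's public API. A minimal sketch using only the Python standard library; the `/api/v1/models` route is OpenRouter's public model listing, and the `OPENROUTER_API_KEY` environment variable name is just a convention used here, not something Clawrium requires:

```python
import os
import urllib.request

# OpenRouter authenticates with a standard Bearer token against its
# public REST API; /api/v1/models lists every model the key can see.
api_key = os.environ.get("OPENROUTER_API_KEY", "sk-or-placeholder")
req = urllib.request.Request(
    "https://openrouter.ai/api/v1/models",
    headers={"Authorization": f"Bearer {api_key}"},
)

# Uncomment to actually call the API: a 200 response whose JSON body
# contains a "data" array confirms the key is valid.
# import json
# with urllib.request.urlopen(req) as resp:
#     print(len(json.load(resp)["data"]), "models visible")
```

A 401 response here means the key is wrong or revoked, which is worth ruling out before debugging the provider configuration.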

3. Select Model

Choose from OpenRouter's extensive model list:

  • anthropic/claude-opus-4 (best quality)
  • google/gemini-2.5-pro (competitive pricing)
  • openai/gpt-4o (familiar API)

Configuration

# View provider details
clm provider list

# Change default model
clm provider edit my-router --model anthropic/claude-sonnet-4

# Update API key
clm provider edit my-router --update-key

# Remove provider
clm provider remove my-router

Benefits

  • Model fallback: Automatically route to available models
  • Unified billing: One account for multiple providers
  • Competitive pricing: Often cheaper than direct provider access
  • Model variety: Access to open-source and proprietary models
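The model fallback above is a request-level OpenRouter feature: a request can name a primary model plus an ordered list of alternates, which OpenRouter tries in turn if the primary is unavailable. Clawrium builds the request for you, but the payload it ultimately sends looks roughly like this (a sketch; the model choices are illustrative picks from the tables above, and the exact routing semantics are defined by OpenRouter's API docs):

```python
import json

# "model" is tried first; entries in "models" are ordered fallbacks
# that OpenRouter routes to when the primary is unavailable.
payload = {
    "model": "anthropic/claude-sonnet-4.5",
    "models": ["google/gemini-2.5-pro", "deepseek/deepseek-chat-v3.1"],
    "messages": [{"role": "user", "content": "Hello"}],
}

# This JSON body would be POSTed to
# https://openrouter.ai/api/v1/chat/completions with the usual
# Authorization: Bearer header.
body = json.dumps(payload)
```

The response reports which model actually served the request, so fallbacks are visible in billing and logs.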

Pricing

OpenRouter adds a small fee on top of provider costs. Check OpenRouter pricing for current rates.
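For budgeting, per-request cost is simple arithmetic: token counts divided by a million, times the per-million-token rate, plus OpenRouter's cut. The rates and the 5% fee in this sketch are placeholders for illustration, not current prices; substitute the real numbers from the pricing page:

```python
# Illustrative only: USD per million tokens (placeholder rates, not
# current OpenRouter prices) plus an assumed ~5% platform fee.
PROMPT_RATE = 3.00       # $/1M input tokens (placeholder)
COMPLETION_RATE = 15.00  # $/1M output tokens (placeholder)
FEE = 0.05               # assumed OpenRouter markup (placeholder)

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimated USD cost of one request, fee included."""
    base = (prompt_tokens / 1e6) * PROMPT_RATE \
        + (completion_tokens / 1e6) * COMPLETION_RATE
    return base * (1 + FEE)

# e.g. 10K prompt tokens + 2K completion tokens
print(f"${request_cost(10_000, 2_000):.4f}")  # → $0.0630
```

Output-heavy workloads dominate the bill at these kinds of rate ratios, so the completion rate is usually the number to watch.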

Usage in Agents

During agent onboarding:

clm agent configure my-agent
# Select my-router when prompted for provider

Model Routing

You can switch models without changing providers:

clm provider edit my-router --model deepseek/deepseek-chat-v3.1

Then restart your agent to use the new model.

Troubleshooting

"Model not available"

  • Check OpenRouter model status page
  • Some models may have temporary outages
  • Try a fallback model

"Credits exhausted"

  • Add credits in OpenRouter dashboard
  • Set up auto-recharge for production use

Back to Providers