Somrat Sorkar committed on
Commit b19cd4a · 1 Parent(s): 837da99

Add OpenRouter support + 15+ popular models (Qwen, Grok, MiMo, Seed, Nemotron, GLM, Mercury, etc.)

Files changed (3)
  1. .env.example +36 -0
  2. README.md +73 -0
  3. start.sh +22 -2
.env.example CHANGED
@@ -55,6 +55,42 @@ LLM_API_KEY=your_api_key_here
 # - groq/mixtral-8x7b-32768
 # - groq/llama2-70b-4096
 #
+# Qwen:
+# - qwen/qwen3.6-plus-preview (free, 1M context)
+# - qwen/qwen3.5-35b-a3b
+# - qwen/qwen3.5-9b
+#
+# xAI Grok:
+# - x-ai/grok-4.20-beta
+# - x-ai/grok-4.20-multi-agent-beta
+#
+# NVIDIA:
+# - nvidia/nemotron-3-super-120b-a12b
+# - nvidia/nemotron-3-super-120b-a12b (free)
+#
+# Reka:
+# - reka/reka-edge
+#
+# Xiaomi:
+# - xiaomi/mimo-v2-pro (1M context)
+# - xiaomi/mimo-v2-omni (256K context, multimodal)
+#
+# ByteDance Seed:
+# - bytedance-seed/seed-2.0-lite
+# - bytedance-seed/seed-2.0-mini
+#
+# Z.ai GLM:
+# - z-ai/glm-5-turbo
+#
+# KwaiPilot:
+# - kwaipilot/kat-coder-pro-v2
+#
+# OpenRouter (any model via OpenRouter proxy):
+# - openrouter/google/lyria-3-pro-preview (music generation)
+# - openrouter/inception/mercury-2 (fast reasoning)
+# Note: With OpenRouter, you can access 100+ models with a single API key!
+# See https://openrouter.ai/models for the complete list
+#
 # Or any other provider supported by OpenClaw (format: provider/model-name)
 LLM_MODEL=anthropic/claude-sonnet-4-5
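The entries in .env.example are plain KEY=value lines. A minimal sketch (file name and values hypothetical, not part of this commit) of how such a file becomes the environment variables that start.sh later reads:

```shell
# Write a sample env file, then source it with auto-export enabled.
cat > demo.env <<'EOF'
LLM_API_KEY=your_api_key_here
LLM_MODEL=openrouter/inception/mercury-2
EOF

set -a          # every variable assigned while sourcing is exported
. ./demo.env
set +a

echo "$LLM_MODEL"   # → openrouter/inception/mercury-2
```

`set -a` avoids having to prefix each line with `export`, which is why the example file can stay in bare KEY=value form.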
README.md CHANGED
@@ -189,6 +189,79 @@ LLM_MODEL=groq/mixtral-8x7b-32768
 Models: `groq/mixtral-8x7b-32768` · `groq/llama2-70b-4096`
 Get key from: [Groq Console](https://console.groq.com)
 
+### Qwen
+```
+LLM_API_KEY=your_qwen_api_key
+LLM_MODEL=qwen/qwen3.6-plus-preview
+```
+Models: `qwen/qwen3.6-plus-preview` (free!) · `qwen/qwen3.5-35b-a3b` · `qwen/qwen3.5-9b`
+Get key from: [Qwen API](https://dashscope.aliyun.com)
+
+### xAI (Grok)
+```
+LLM_API_KEY=your_xai_api_key
+LLM_MODEL=x-ai/grok-4.20-beta
+```
+Models: `x-ai/grok-4.20-beta` · `x-ai/grok-4.20-multi-agent-beta`
+Get key from: [xAI Console](https://console.x.ai)
+
+### NVIDIA (Nemotron)
+```
+LLM_API_KEY=your_nvidia_api_key
+LLM_MODEL=nvidia/nemotron-3-super-120b-a12b
+```
+Models: `nvidia/nemotron-3-super-120b-a12b` · `nvidia/nemotron-3-super-120b-a12b` (free)
+Get key from: [NVIDIA API](https://api.nvidia.com)
+
+### Xiaomi (MiMo)
+```
+LLM_API_KEY=your_xiaomi_api_key
+LLM_MODEL=xiaomi/mimo-v2-pro
+```
+Models: `xiaomi/mimo-v2-pro` (1M context) · `xiaomi/mimo-v2-omni` (multimodal)
+Get key from: [Xiaomi](https://xiaoai.xiaomi.com)
+
+### ByteDance (Seed)
+```
+LLM_API_KEY=your_bytedance_api_key
+LLM_MODEL=bytedance-seed/seed-2.0-lite
+```
+Models: `bytedance-seed/seed-2.0-lite` · `bytedance-seed/seed-2.0-mini`
+Get key from: [ByteDance](https://www.volcengine.com)
+
+### Z.ai (GLM)
+```
+LLM_API_KEY=your_zai_api_key
+LLM_MODEL=z-ai/glm-5-turbo
+```
+Models: `z-ai/glm-5-turbo`
+Get key from: [Z.ai](https://z.ai)
+
+### OpenRouter (100+ models via a single API)
+```
+LLM_API_KEY=your_openrouter_api_key
+LLM_MODEL=openrouter/google/lyria-3-pro-preview
+```
+**Popular models via OpenRouter:**
+- `openrouter/openai/gpt-5.4-pro` — Latest OpenAI (1M context)
+- `openrouter/openai/gpt-5.4-mini` — Fast, efficient OpenAI
+- `openrouter/google/gemini-3.1-flash-lite-preview` — Google's latest
+- `openrouter/anthropic/claude-opus-4-6` — Latest Claude via OpenRouter
+- `openrouter/mistral/mistral-small-2603` — Mistral's latest
+- `openrouter/inception/mercury-2` — Ultra-fast reasoning (1000 tok/sec)
+- `openrouter/qwen/qwen3.6-plus-preview` — Free tier available!
+- `openrouter/x-ai/grok-4.20-beta` — xAI's latest Grok
+- `openrouter/nvidia/nemotron-3-super-120b-a12b` — NVIDIA's powerhouse
+- `openrouter/xiaomi/mimo-v2-pro` — Xiaomi's 1M-context model
+
+**Why OpenRouter?**
+- Single API key for 100+ models
+- Unified pricing and routing
+- Auto-fallback to other models
+- No vendor lock-in
+
+Get key from: [OpenRouter.ai](https://openrouter.ai) (free tier available!)
+
 ### Any Other Provider
 HuggingClaw supports **any LLM provider** that OpenClaw supports. Just use:
 ```
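OpenRouter's API is OpenAI-compatible, which is why one key reaches every routed model. A hedged sketch of a direct call (requires `OPENROUTER_API_KEY` in the environment; note that the `openrouter/` prefix used in `LLM_MODEL` is routing syntax for this project and is dropped when talking to the API itself):

```shell
# Direct request to OpenRouter's OpenAI-compatible chat completions route.
curl -s https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "inception/mercury-2",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
```

Swapping `model` is all it takes to move between providers, which is what the auto-fallback and no-lock-in bullets above amount to in practice.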
start.sh CHANGED
@@ -40,7 +40,9 @@ if [[ "$LLM_MODEL" == "anthropic/gemini"* ]]; then
 fi
 
 # Auto-detect and set provider-specific API key from model name
-if [[ "$LLM_MODEL" == "google/"* ]]; then
+if [[ "$LLM_MODEL" == "openrouter/"* ]]; then
+    export OPENROUTER_API_KEY="$LLM_API_KEY"
+elif [[ "$LLM_MODEL" == "google/"* ]]; then
     export GOOGLE_API_KEY="$LLM_API_KEY"
 elif [[ "$LLM_MODEL" == "openai/"* ]]; then
     export OPENAI_API_KEY="$LLM_API_KEY"
@@ -50,12 +52,30 @@ elif [[ "$LLM_MODEL" == "moonshot/"* ]]; then
     export MOONSHOT_API_KEY="$LLM_API_KEY"
 elif [[ "$LLM_MODEL" == "minimax/"* ]]; then
     export MINIMAX_API_KEY="$LLM_API_KEY"
-elif [[ "$LLM_MODEL" == "mistral/"* ]]; then
+elif [[ "$LLM_MODEL" == "mistral/"* ]] || [[ "$LLM_MODEL" == "mistralai/"* ]]; then
     export MISTRAL_API_KEY="$LLM_API_KEY"
 elif [[ "$LLM_MODEL" == "cohere/"* ]]; then
     export COHERE_API_KEY="$LLM_API_KEY"
 elif [[ "$LLM_MODEL" == "groq/"* ]]; then
     export GROQ_API_KEY="$LLM_API_KEY"
+elif [[ "$LLM_MODEL" == "qwen/"* ]]; then
+    export QWEN_API_KEY="$LLM_API_KEY"
+elif [[ "$LLM_MODEL" == "x-ai/"* ]] || [[ "$LLM_MODEL" == "xai/"* ]]; then
+    export XAI_API_KEY="$LLM_API_KEY"
+elif [[ "$LLM_MODEL" == "nvidia/"* ]]; then
+    export NVIDIA_API_KEY="$LLM_API_KEY"
+elif [[ "$LLM_MODEL" == "reka/"* ]]; then
+    export REKA_API_KEY="$LLM_API_KEY"
+elif [[ "$LLM_MODEL" == "bytedance-seed/"* ]] || [[ "$LLM_MODEL" == "bytedance/"* ]] || [[ "$LLM_MODEL" == "seed/"* ]]; then
+    export BYTEDANCE_API_KEY="$LLM_API_KEY"
+elif [[ "$LLM_MODEL" == "kwaipilot/"* ]]; then
+    export KWAIPILOT_API_KEY="$LLM_API_KEY"
+elif [[ "$LLM_MODEL" == "z-ai/"* ]]; then
+    export ZAI_API_KEY="$LLM_API_KEY"
+elif [[ "$LLM_MODEL" == "inception/"* ]]; then
+    export INCEPTION_API_KEY="$LLM_API_KEY"
+elif [[ "$LLM_MODEL" == "xiaomi/"* ]]; then
+    export XIAOMI_API_KEY="$LLM_API_KEY"
 else
     # Default to Anthropic for claude/* or anthropic/* models
     export ANTHROPIC_API_KEY="$LLM_API_KEY"
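The prefix dispatch in start.sh can be exercised in isolation. A sketch (not the repo's code) that restates the same mapping as a `case` inside a function, covering a few of the prefixes above:

```shell
# Map a model's provider prefix to the env var that provider expects.
# Same logic as start.sh's if/elif chain, reduced to a few branches.
set_provider_key() {
  model="$1"; key="$2"
  case "$model" in
    openrouter/*)  export OPENROUTER_API_KEY="$key" ;;
    google/*)      export GOOGLE_API_KEY="$key" ;;
    qwen/*)        export QWEN_API_KEY="$key" ;;
    x-ai/*|xai/*)  export XAI_API_KEY="$key" ;;
    *)             export ANTHROPIC_API_KEY="$key" ;;  # default: Anthropic
  esac
}

set_provider_key "openrouter/inception/mercury-2" "sk-demo"
echo "$OPENROUTER_API_KEY"   # → sk-demo
```

A `case` statement keeps each prefix on one line and makes the default branch explicit; the script itself uses an equivalent if/elif chain, and both rely on first-match-wins ordering, so the `openrouter/` test must come before any bare provider prefix it could shadow.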