---
license: apache-2.0
base_model:
- stepfun-ai/step-3.5-flash
library_name: transformers
---

```
GGUF quantizations of StepFun Step 3.5 Flash model
llama.cpp version: 7964 (b83111815)
```

# Step 3.5 Flash

<div align="center">

<div align="center" style="display: flex; justify-content: center; align-items: center;">
  <img src="stepfun.svg" width="25" style="margin-right: 10px;"/>
  <h1 style="margin: 0; border-bottom: none;">Step 3.5 Flash</h1>
</div>

[![GitHub](https://img.shields.io/badge/GitHub-181717?style=flat&logo=github&logoColor=white)](https://github.com/stepfun-ai/Step-3.5-Flash)
[![Hugging Face](https://img.shields.io/badge/%F0%9F%A4%97%20HF-StepFun/STEP3p5-preview)](https://huggingface.co/stepfun-ai/Step-3.5-Flash)
[![ModelScope](https://img.shields.io/badge/ModelScope-StepFun/STEP3p5-preview)](https://modelscope.cn/models/stepfun-ai/Step-3.5-Flash)
[![Discord](https://img.shields.io/badge/Discord-Join-5865F2?logo=discord&logoColor=white)](https://discord.gg/RcMJhNVAQc)
[![Webpage](https://img.shields.io/badge/Webpage-Blog-blue)](https://static.stepfun.com/blog/step-3.5-flash/)
[![Paper](https://img.shields.io/badge/Paper-Arxiv-red)](https://github.com/stepfun-ai/Step-3.5-Flash/blob/main/step_3p5_flash_tech_report.pdf)
[![License](https://img.shields.io/badge/License-Apache%202.0-green)]()
[![Chat with the model on OpenRouter](https://img.shields.io/badge/Chat%20with%20the%20model-OpenRouter-5B3DF5?logo=chatbot&logoColor=white)](https://openrouter.ai/chat?models=stepfun/step-3.5-flash:free)
[![Chat with the model on HuggingfaceSpace](https://img.shields.io/badge/Chat%20with%20the%20model-HuggingfaceSpace-5B3DF5?logo=chatbot&logoColor=white)](https://huggingface.co/spaces/stepfun-ai/Step-3.5-Flash)
</div>

## 1. Introduction

**Step 3.5 Flash** ([visit website](https://static.stepfun.com/blog/step-3.5-flash/)) is our most capable open-source foundation model, engineered to deliver frontier reasoning and agentic capabilities with exceptional efficiency. Built on a sparse Mixture of Experts (MoE) architecture, it selectively activates only 11B of its 196B parameters per token. This "intelligence density" allows it to rival the reasoning depth of top-tier proprietary models while maintaining the agility required for real-time interaction.

## 2. Key Capabilities

- **Deep Reasoning at Speed**: While chatbots are built for reading, agents must reason fast. Powered by 3-way Multi-Token Prediction (MTP-3), Step 3.5 Flash achieves a generation throughput of **100–300 tok/s** in typical usage (peaking at **350 tok/s** for single-stream coding tasks). This allows for complex, multi-step reasoning chains with immediate responsiveness.

- **A Robust Engine for Coding & Agents**: Step 3.5 Flash is purpose-built for agentic tasks, integrating a scalable RL framework that drives consistent self-improvement. It achieves **74.4% on SWE-bench Verified** and **51.0% on Terminal-Bench 2.0**, demonstrating its ability to handle sophisticated, long-horizon tasks with unwavering stability.

- **Efficient Long Context**: The model supports a cost-efficient **256K context window** by employing a 3:1 Sliding Window Attention (SWA) ratio, integrating three SWA layers for every full-attention layer (see the sketch after this list). This hybrid approach ensures consistent performance across massive documents or long codebases while significantly reducing the computational overhead typical of standard long-context models.

- **Accessible Local Deployment**: Optimized for accessibility, Step 3.5 Flash brings elite-level intelligence to local environments. It runs securely on high-end consumer hardware (e.g., Mac Studio M4 Max, NVIDIA DGX Spark), ensuring data privacy without sacrificing performance.

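The 3:1 hybrid can be pictured as a repeating four-layer block. The sketch below is purely illustrative (the exact placement of the full-attention layers and the SWA window size are not published here, so both are assumptions):

```python
# Illustrative layout of a 3:1 SWA / full-attention hybrid.
# Assumptions: the pattern repeats as [SWA, SWA, SWA, full]; the
# 4096-token window is a placeholder, not a published value.

NUM_LAYERS = 45      # backbone depth from the spec table below
SWA_WINDOW = 4096    # hypothetical sliding-window size

def attention_kind(layer_idx: int) -> str:
    """Three sliding-window layers for every full-attention layer."""
    return "full" if layer_idx % 4 == 3 else "swa"

layout = [attention_kind(i) for i in range(NUM_LAYERS)]
print(f"{layout.count('swa')} SWA layers, {layout.count('full')} full-attention layers")
```

Because only a minority of layers attend over the whole 256K context, the KV-cache and attention cost of the remaining layers are bounded by the window size rather than the sequence length.
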
## 3. Performance

Step 3.5 Flash delivers performance parity with leading closed-source systems while remaining open and efficient.

![](step-bar-chart.png)

Performance of Step 3.5 Flash measured across **Reasoning**, **Coding**, and **Agentic Abilities**. Open-source models (left) are sorted by their total parameter count, while top-tier proprietary models are shown on the right. xbench-DeepSearch scores are sourced from [official publications](https://xbench.org/agi/aisearch) for consistency. The shadowed bars represent the enhanced performance of Step 3.5 Flash using [Parallel Thinking](https://arxiv.org/pdf/2601.05593).

### Detailed Benchmarks

| Benchmark | Step 3.5 Flash | DeepSeek V3.2 | Kimi K2 Thinking / K2.5 | GLM-4.7 | MiniMax M2.1 | MiMo-V2 Flash |
| --- | --- | --- | --- | --- | --- | --- |
| # Activated Params | 11B | 37B | 32B | 32B | 10B | 15B |
| # Total Params (MoE) | 196B | 671B | 1T | 355B | 230B | 309B |
| Est. decoding cost @ 128K context, Hopper GPU (see Note 4) | **1.0x**<br>100 tok/s, MTP-3, EP8 | **6.0x**<br>33 tok/s, MTP-1, EP32 | **18.9x**<br>33 tok/s, no MTP, EP32 | **18.9x**<br>100 tok/s, MTP-3, EP8 | **3.9x**<br>100 tok/s, MTP-3, EP8 | **1.2x**<br>100 tok/s, MTP-3, EP8 |
| | | | **Agent** | | | |
| τ²-Bench | 88.2 | 80.3 (85.2*) | 74.3* / 85.4* | 87.4 | 86.6* | 80.3 (84.1*) |
| BrowseComp | 51.6 | 51.4 | 41.5* / 60.6 | 52.0 | 47.4 | 45.4 |
| BrowseComp (w/ Context Manager) | 69.0 | 67.6 | 60.2 / 74.9 | 67.5 | 62.0 | 58.3 |
| BrowseComp-ZH | 66.9 | 65.0 | 62.3 / 62.3* | 66.6 | 47.8* | 51.2* |
| BrowseComp-ZH (w/ Context Manager) | 73.7 | — | — / — | — | — | — |
| GAIA (no file) | 84.5 | 75.1* | 75.6* / 75.9* | 61.9* | 64.3* | 78.2* |
| xbench-DeepSearch (2025.05) | 83.7 | 78.0* | 76.0* / 76.7* | 72.0* | 68.7* | 69.3* |
| xbench-DeepSearch (2025.10) | 56.3 | 55.7* | — / 40+ | 52.3* | 43.0* | 44.0* |
| ResearchRubrics | 65.3 | 55.8* | 56.2* / 59.5* | 62.0* | 60.2* | 54.3* |
| | | | **Reasoning** | | | |
| AIME 2025 | 97.3 | 93.1 | 94.5 / 96.1 | 95.7 | 83.0 | 94.1 (95.1*) |
| HMMT 2025 (Feb.) | 98.4 | 92.5 | 89.4 / 95.4 | 97.1 | 71.0* | 84.4 (95.4*) |
| HMMT 2025 (Nov.) | 94.0 | 90.2 | 89.2* / — | 93.5 | 74.3* | 91.0* |
| IMOAnswerBench | 85.4 | 78.3 | 78.6 / 81.8 | 82.0 | 60.4* | 80.9* |
| | | | **Coding** | | | |
| LiveCodeBench-V6 | 86.4 | 83.3 | 83.1 / 85.0 | 84.9 | — | 80.6 (81.6*) |
| SWE-bench Verified | 74.4 | 73.1 | 71.3 / 76.8 | 73.8 | 74.0 | 73.4 |
| Terminal-Bench 2.0 | 51.0 | 46.4 | 35.7* / 50.8 | 41.0 | 47.9 | 38.5 |

**Notes**:
1. "—" indicates the score is not publicly available or not tested.
2. "*" indicates the original score was inaccessible or lower than our reproduced result, so we report our evaluation under the same test conditions as Step 3.5 Flash to ensure fair comparability.
3. **BrowseComp (with Context Manager)**: When the effective context length exceeds a predefined threshold, the agent resets the context and restarts the agent loop. By contrast, Kimi K2.5 and DeepSeek-V3.2 used a "discard-all" strategy.
4. **Decoding Cost**: Estimates are based on a methodology similar to, but more accurate than, the approach described in https://arxiv.org/abs/2507.19427.

## 4. Architecture Details

Step 3.5 Flash is built on a **Sparse Mixture-of-Experts (MoE)** transformer architecture, optimized for high throughput and low VRAM usage during inference.

### 4.1 Technical Specifications

| Component | Specification |
| :--- | :--- |
| **Backbone** | 45-layer Transformer (4,096 hidden dim) |
| **Context Window** | 256K |
| **Vocabulary** | 128,896 tokens |
| **Total Parameters** | **196.81B** (196B Backbone + 0.81B Head) |
| **Active Parameters** | **~11B** (per token generation) |

### 4.2 Mixture of Experts (MoE) Routing

Unlike traditional dense models, Step 3.5 Flash uses a fine-grained routing strategy to maximize efficiency (a toy sketch follows the list):
- **Fine-Grained Experts**: 288 routed experts per layer + 1 shared expert (always active).
- **Sparse Activation**: Only the Top-8 experts are selected per token.
- **Result**: The model retains the "memory" of a 196B-parameter model but executes with the speed of an 11B model.

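The routing step itself is a simple top-k selection over per-expert scores. Below is a toy sketch of that mechanism (our own illustration, not the model's actual implementation; load balancing and the shared expert are omitted):

```python
import torch

# Toy top-8-of-288 router (illustrative only; real MoE routing also adds
# load balancing and a shared expert that is always active).
NUM_EXPERTS, TOP_K, HIDDEN = 288, 8, 4096

router = torch.nn.Linear(HIDDEN, NUM_EXPERTS, bias=False)
x = torch.randn(1, HIDDEN)                       # one token's hidden state

scores = router(x).softmax(dim=-1)               # routing probabilities
weights, expert_ids = scores.topk(TOP_K, dim=-1)
weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize over the top-8

# Only the 8 selected experts (plus the shared expert) run for this token,
# which is why compute tracks ~11B active parameters rather than 196B.
print(expert_ids.tolist())
```
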
### 4.3 Multi-Token Prediction (MTP)

To improve inference speed, we utilize a specialized MTP Head consisting of a sliding-window attention mechanism and a dense Feed-Forward Network (FFN). This module predicts 4 tokens simultaneously in a single forward pass, significantly accelerating inference without degrading quality.

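Conceptually this works like self-speculative decoding: the lightweight head drafts a short run of tokens, and the backbone verifies them in a single batched pass. The loop below is a simplified sketch of that idea (`draft_head` and `verify_step` are hypothetical stand-ins, not APIs from this repository):

```python
# Simplified draft-and-verify decoding loop (illustrative pseudocode).

def generate_with_mtp(prompt_ids, draft_head, verify_step, max_tokens=256):
    # draft_head: proposes several candidate tokens in one cheap forward pass.
    # verify_step: the backbone scores all candidates in one batched pass and
    # returns the accepted prefix plus its own corrected token at the first
    # mismatch, so output quality matches one-token-at-a-time decoding.
    ids = list(prompt_ids)
    while len(ids) < max_tokens:
        drafts = draft_head(ids)
        ids.extend(verify_step(ids, drafts))
    return ids
```
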
## 5. Quick Start

You can get started with Step 3.5 Flash in minutes using the cloud API via our supported providers.

### 5.1 Get Your API Key

Sign up at [OpenRouter](https://openrouter.ai) or [platform.stepfun.ai](https://platform.stepfun.ai), and grab your API key.

> OpenRouter now offers a free trial for Step 3.5 Flash.

| Provider | Website | Base URL |
| :--- | :--- | :--- |
| OpenRouter | https://openrouter.ai | https://openrouter.ai/api/v1 |
| StepFun | https://platform.stepfun.ai | https://api.stepfun.ai/v1 |

### 5.2 Setup

Install the standard OpenAI SDK (compatible with both platforms).

```bash
pip install --upgrade "openai>=1.0"
```

Note: OpenRouter supports multiple SDKs. Learn more [here](https://openrouter.ai/docs/quickstart).

### 5.3 Implementation Example

This example shows how to start a chat with Step 3.5 Flash.

```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.stepfun.ai/v1",  # or "https://openrouter.ai/api/v1"
    # Optional: OpenRouter headers for app rankings
    default_headers={
        "HTTP-Referer": "<YOUR_SITE_URL>",
        "X-Title": "<YOUR_SITE_NAME>",
    }
)

completion = client.chat.completions.create(
    model="step-3.5-flash",  # Use "stepfun/step-3.5-flash" for OpenRouter
    messages=[
        {
            "role": "system",
            "content": "You are an AI chat assistant provided by StepFun. You are good at Chinese, English, and many other languages.",
        },
        {
            "role": "user",
            "content": "Introduce StepFun's artificial intelligence capabilities."
        },
    ],
)

print(completion.choices[0].message.content)
```

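Both providers follow the OpenAI chat protocol, so you can also stream tokens as they are generated, which pairs naturally with the model's high decoding throughput. A minimal sketch reusing the `client` from the example above (standard OpenAI SDK streaming):

```python
stream = client.chat.completions.create(
    model="step-3.5-flash",  # Use "stepfun/step-3.5-flash" for OpenRouter
    messages=[{"role": "user", "content": "Summarize MoE routing in one sentence."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:  # some chunks carry only role/metadata
        print(delta, end="", flush=True)
```
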
## 6. Local Deployment

Step 3.5 Flash is optimized for local inference and supports industry-standard backends including vLLM, SGLang, Hugging Face Transformers, and llama.cpp.

### 6.1 vLLM

We recommend using the latest nightly build of vLLM.

1. Install vLLM.

```bash
# via Docker
docker pull vllm/vllm-openai:nightly

# or via pip (nightly wheels)
pip install -U vllm --pre \
    --index-url https://pypi.org/simple \
    --extra-index-url https://wheels.vllm.ai/nightly
```

2. Launch the server.

**Note**: Full MTP-3 support is not yet available in vLLM. We are actively working on a Pull Request to integrate this feature, which is expected to significantly enhance decoding performance.

- For the fp8 model:
```bash
vllm serve <MODEL_PATH_OR_HF_ID> \
    --served-model-name step3p5-flash \
    --tensor-parallel-size 8 \
    --enable-expert-parallel \
    --disable-cascade-attn \
    --reasoning-parser step3p5 \
    --enable-auto-tool-choice \
    --tool-call-parser step3p5 \
    --hf-overrides '{"num_nextn_predict_layers": 1}' \
    --speculative_config '{"method": "step3p5_mtp", "num_speculative_tokens": 1}' \
    --trust-remote-code \
    --quantization fp8
```

- For the bf16 model:
```bash
vllm serve <MODEL_PATH_OR_HF_ID> \
    --served-model-name step3p5-flash \
    --tensor-parallel-size 8 \
    --enable-expert-parallel \
    --disable-cascade-attn \
    --reasoning-parser step3p5 \
    --enable-auto-tool-choice \
    --tool-call-parser step3p5 \
    --hf-overrides '{"num_nextn_predict_layers": 1}' \
    --speculative_config '{"method": "step3p5_mtp", "num_speculative_tokens": 1}' \
    --trust-remote-code
```

You can also refer to the [Step-3.5-Flash](https://github.com/vllm-project/recipes/blob/main/StepFun/Step-3.5-Flash.md) recipe.

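Once the server is up, it exposes an OpenAI-compatible endpoint, so you can sanity-check the deployment with the same SDK used in the Quick Start (the same check works for the SGLang server below). A minimal sketch, assuming the default `localhost:8000` address from the launch commands above:

```python
from openai import OpenAI

# Point the standard OpenAI client at the local server; the API key is unused.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="step3p5-flash",  # must match --served-model-name
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(resp.choices[0].message.content)
```
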
### 6.2 SGLang

1. Install SGLang.
```bash
# via Docker
docker pull lmsysorg/sglang:dev-pr-18084
# or from source (pip)
pip install "sglang[all] @ git+https://github.com/sgl-project/sglang.git"
```

2. Launch the server.
- For the bf16 model:

```bash
sglang serve --model-path <MODEL_PATH_OR_HF_ID> \
    --served-model-name step3p5-flash \
    --tp-size 8 \
    --tool-call-parser step3p5 \
    --reasoning-parser step3p5 \
    --speculative-algorithm EAGLE \
    --speculative-num-steps 3 \
    --speculative-eagle-topk 1 \
    --speculative-num-draft-tokens 4 \
    --enable-multi-layer-eagle \
    --host 0.0.0.0 \
    --port 8000
```
- For the fp8 model:
```bash
sglang serve --model-path <MODEL_PATH_OR_HF_ID> \
    --served-model-name step3p5-flash \
    --tp-size 8 \
    --ep-size 8 \
    --tool-call-parser step3p5 \
    --reasoning-parser step3p5 \
    --speculative-algorithm EAGLE \
    --speculative-num-steps 3 \
    --speculative-eagle-topk 1 \
    --speculative-num-draft-tokens 4 \
    --enable-multi-layer-eagle \
    --host 0.0.0.0 \
    --port 8000
```

### 6.3 Transformers (Debug / Verification)

Use this snippet for quick functional verification. For high-throughput serving, use vLLM or SGLang.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "<MODEL_PATH_OR_HF_ID>"

# 1. Setup
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    trust_remote_code=True,
    torch_dtype="auto",
    device_map="auto",
)

# 2. Prepare Input
messages = [{"role": "user", "content": "Explain the significance of the number 42."}]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

# 3. Generate
generated_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
output_text = tokenizer.decode(generated_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)

print(output_text)
```

### 6.4 llama.cpp

#### System Requirements
- GGUF Model Weights (int4): 111.5 GB
- Runtime Overhead: ~7 GB
- Minimum VRAM: 120 GB (e.g., Mac Studio, DGX Spark, AMD Ryzen AI Max+ 395)
- Recommended: 128 GB unified memory

#### Steps
1. Clone the repository (which bundles llama.cpp):
```bash
git clone git@github.com:stepfun-ai/Step-3.5-Flash.git
cd Step-3.5-Flash/llama.cpp
```
2. Build llama.cpp on Mac:
```bash
cmake -S . -B build-macos \
    -DCMAKE_BUILD_TYPE=Release \
    -DGGML_METAL=ON \
    -DGGML_ACCELERATE=ON \
    -DLLAMA_BUILD_EXAMPLES=ON \
    -DLLAMA_BUILD_COMMON=ON \
    -DGGML_LTO=ON
cmake --build build-macos -j8
```
3. Build llama.cpp on DGX Spark:
```bash
cmake -S . -B build-cuda \
    -DCMAKE_BUILD_TYPE=Release \
    -DGGML_CUDA=ON \
    -DGGML_CUDA_GRAPHS=ON \
    -DLLAMA_CURL=OFF \
    -DLLAMA_BUILD_EXAMPLES=ON \
    -DLLAMA_BUILD_COMMON=ON
cmake --build build-cuda -j8
```
4. Build llama.cpp on AMD Windows (Vulkan):
```bash
cmake -S . -B build-vulkan \
    -DCMAKE_BUILD_TYPE=Release \
    -DLLAMA_CURL=OFF \
    -DGGML_OPENMP=ON \
    -DGGML_VULKAN=ON
cmake --build build-vulkan -j8
```
5. Run with llama-cli:
```bash
./llama-cli -m step3.5_flash_Q4_K_S.gguf -c 16384 -b 2048 -ub 2048 -fa on --temp 1.0 -p "What's your name?"
```
6. Test performance with llama-batched-bench:
```bash
./llama-batched-bench -m step3.5_flash_Q4_K_S.gguf -c 32768 -b 2048 -ub 2048 -npp 0,2048,8192,16384,32768 -ntg 128 -npl 1
```

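llama.cpp also ships `llama-server`, which serves GGUF models over an OpenAI-compatible HTTP API. A minimal sketch of querying it from Python, assuming the server was started on the default `localhost:8080` address (e.g. `./llama-server -m step3.5_flash_Q4_K_S.gguf -c 16384 --port 8080`):

```python
from openai import OpenAI

# llama-server accepts any API key; the model field is informational when a
# single model is loaded.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="step3.5-flash",
    messages=[{"role": "user", "content": "What's your name?"}],
)
print(resp.choices[0].message.content)
```
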
## 7. Using Step 3.5 Flash on Agent Platforms

### 7.1 Claude Code & Codex
It's straightforward to add Step 3.5 Flash to the list of models in most coding environments. See below for instructions on configuring Claude Code and Codex to use Step 3.5 Flash.

#### 7.1.1 Prerequisites
Sign up at StepFun.ai or OpenRouter and grab an API key, as mentioned in the Quick Start.

#### 7.1.2 Environment setup
Claude Code and Codex rely on Node.js. We recommend installing Node.js v20 or later. You can install Node via nvm.

**Mac/Linux**:
```bash
# Step 1: install nvm on Mac/Linux via curl
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash

# Load nvm into the current shell (the installer prints these lines)
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"  # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"

# Users in China can set up an npm mirror
npm config set registry https://registry.npmmirror.com

# Step 2: install Node.js
nvm install v22

# Make sure Node.js is installed
node --version
npm --version
```

**Windows**:
You can download the installer (`nvm-setup.exe`) from [https://github.com/coreybutler/nvm-windows/releases](https://github.com/coreybutler/nvm-windows/releases). Follow the instructions to install nvm, then run an `nvm` command to make sure it is installed.

#### 7.1.3 Use Step 3.5 Flash on Claude Code

1. Install Claude Code.
```bash
# install claude code via npm
npm install -g @anthropic-ai/claude-code

# test if the installation is successful
claude --version
```

2. Configure Claude Code.

To accommodate diverse workflows in Claude Code, we support both **Anthropic-style** and **OpenAI-style** APIs.

**Option A: Anthropic API style**

> If you intend to use the **OpenRouter** API, refer to the OpenRouter integration guide.

Step 1: Edit the Claude settings in `~/.claude/settings.json`.
> You only need to modify the fields shown below. Leave the rest of the file unchanged.

```json
{
  "env": {
    "ANTHROPIC_API_KEY": "API_KEY_from_StepFun",
    "ANTHROPIC_BASE_URL": "https://api.stepfun.ai/"
  },
  "model": "step-3.5-flash"
}
```
Step 2: Start Claude Code.

Save the file, then start Claude Code. Run `/status` to confirm the model and base URL.

```txt
❯ /status
─────────────────────────────────────────────────────────────────────────────────
 Settings:  Status  Config  Usage  (←/→ or tab to cycle)

 Version: 2.1.1
 Session name: /rename to add a name
 Session ID: 676dae61-259d-4eef-8c2f-0f1641600553
 cwd: /Users/step-test/
 Auth token: none
 API key: ANTHROPIC_API_KEY
 Anthropic base URL: https://api.stepfun.ai/

 Model: step-3.5-flash
 Setting sources: User settings
```

**Option B: OpenAI API style**

> Note: OpenAI API style here refers to the `chat/completions` format.

> We recommend using `claude-code-router`. For details, see [https://github.com/musistudio/claude-code-router](https://github.com/musistudio/claude-code-router).

After Claude Code is installed, install `claude-code-router`:

```bash
# install ccr via npm
npm install -g @musistudio/claude-code-router

# validate it is installed
ccr -v
```

Add the following configuration to `~/.claude-code-router/config.json`.

```json
{
  "PORT": 3456,
  "Providers": [
    {
      "name": "stepfun-api",
      "api_base_url": "https://api.stepfun.com/v1/chat/completions",
      "api_key": "StepFun_API_KEY",
      "models": ["step-3.5-flash"],
      "transformer": {
        "step-3.5-flash": { "use": ["OpenAI"] }
      }
    }
  ],
  "Router": {
    "default": "stepfun-api,step-3.5-flash",
    "background": "stepfun-api,step-3.5-flash",
    "think": "stepfun-api,step-3.5-flash",
    "longContext": "stepfun-api,step-3.5-flash",
    "webSearch": "stepfun-api,step-3.5-flash"
  }
}
```
You can now start Claude Code:

```bash
# Start Claude
ccr code

# restart ccr if configs are changed
ccr restart
```

#### 7.1.4 Use Step 3.5 Flash on Codex
1. Install Codex.
```bash
# Install codex via npm
npm install -g @openai/codex

# Test if it is installed
codex --version
```

2. Configure Codex.
Add the following settings to `~/.codex/config.toml`, keeping the rest of the settings as they are.

```toml
model = "step-3.5-flash"
model_provider = "stepfun-chat"
preferred_auth_method = "apikey"

# configure the provider
[model_providers.stepfun-chat]
name = "OpenAI using response"
base_url = "https://api.stepfun.com/v1"
env_key = "OPENAI_API_KEY"
wire_api = "chat"
query_params = {}
```

For Codex with Step 3.5 Flash, `wire_api` only supports `chat`; if your existing config uses the `responses` mode, change it to `chat` and switch `model_provider` to the newly configured `stepfun-chat`.

Once the configuration is done, run `codex` in a new terminal window to start Codex. Run `/status` to check the configuration.

```txt
/status
📂 Workspace
  • Path: /Users/step-test/
  • Approval Mode: on-request
  • Sandbox: workspace-write
  • AGENTS files: (none)

🧠 Model
  • Name: step-3.5-flash
  • Provider: Stepfun-chat

💻 Client
  • CLI Version: 0.40.0
```

#### 7.1.5 Use Step 3.5 Flash on Step-DeepResearch
1. Follow the environment setup guide at [https://github.com/stepfun-ai/StepDeepResearch?tab=readme-ov-file#1-environment-setup](https://github.com/stepfun-ai/StepDeepResearch?tab=readme-ov-file#1-environment-setup) and set `MODEL_NAME` to `Step-3.5-Flash`.

## 8. Known Issues and Future Directions

1. **Token Efficiency**. Step 3.5 Flash achieves frontier-level agentic intelligence but currently relies on longer generation trajectories than Gemini 3.0 Pro to reach comparable quality.
2. **Efficient Universal Mastery**. We aim to unify generalist versatility with deep domain expertise. To achieve this efficiently, we are advancing variants of on-policy distillation, allowing the model to internalize expert behaviors with higher sample efficiency.
3. **RL for More Agentic Tasks**. While Step 3.5 Flash demonstrates competitive performance on academic agentic benchmarks, the next frontier of agentic AI requires applying RL to the intricate, expert-level tasks found in professional work, engineering, and research.
4. **Operational Scope and Constraints**. Step 3.5 Flash is tailored for coding and work-centric tasks, but may experience reduced stability under distribution shift. This typically occurs in highly specialized domains or long-horizon, multi-turn dialogues, where the model may exhibit repetitive reasoning, mixed-language outputs, or inconsistencies in time and identity awareness.

## 9. Co-Developing the Future

We view our roadmap as a living document, evolving continuously based on real-world usage and developer feedback.
As we work to shape the future of AGI by expanding broad model capabilities, we want to ensure we are solving the right problems. We invite you to be part of this continuous feedback loop; your insights directly influence our priorities.

- **Join the Conversation**: Our Discord community is the primary hub for brainstorming future architectures, proposing capabilities, and getting early access updates 🚀
- **Report Friction**: Encountering limitations? You can open an issue on GitHub or flag it directly in our Discord support channels.

## License
This project is open-sourced under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).