---
license: apache-2.0
base_model:
- stepfun-ai/step-3.5-flash
---

# Step 3.5 Flash

<div align="center">

<div align="center" style="display: flex; justify-content: center; align-items: center;">
<img src="https://huggingface.co/stepfun-ai/Step-3.5-Flash/resolve/main/figures/stepfun.svg" width="25" style="margin-right: 10px;"/>
<h1 style="margin: 0; border-bottom: none;">Step-3.5-Flash</h1>
</div>

[Model weights](https://huggingface.co/stepfun-ai/step3p5_preview/tree/main)

</div>

## 1. Introduction

**Step 3.5 Flash** is our most capable open-source foundation model, engineered to deliver frontier reasoning and agentic capabilities with exceptional efficiency. Built on a sparse Mixture of Experts (MoE) architecture, it selectively activates only 11B of its 196B parameters per token. This "intelligence density" allows it to rival the reasoning depth of top-tier proprietary models, while maintaining the agility required for real-time interaction.

## 2. Key Capabilities

- **Deep Reasoning at Speed**: While chatbots are built for reading, agents must reason fast. Powered by 3-way Multi-Token Prediction (MTP-3), Step 3.5 Flash achieves a generation throughput of **100–300 tok/s** in typical usage (peaking at **350 tok/s** for single-stream coding tasks). This allows for complex, multi-step reasoning chains with immediate responsiveness.

- **A Robust Engine for Coding & Agents**: Step 3.5 Flash is purpose-built for agentic tasks, integrating a scalable RL framework that drives consistent self-improvement. It achieves **74.4% on SWE-bench Verified** and **51.0% on Terminal-Bench 2.0**, proving its ability to handle sophisticated, long-horizon tasks with unwavering stability.

- **Efficient Long Context**: The model supports a cost-efficient **256K context window** by employing a 3:1 Sliding Window Attention (SWA) ratio—integrating three SWA layers for every full-attention layer. This hybrid approach ensures consistent performance across massive datasets or long codebases while significantly reducing the computational overhead typical of standard long-context models.

- **Accessible Local Deployment**: Optimized for accessibility, Step 3.5 Flash brings elite-level intelligence to local environments. It runs securely on high-end consumer hardware (e.g., Mac Studio M4 Max, NVIDIA DGX Spark), ensuring data privacy without sacrificing performance.

## 3. Performance

Step 3.5 Flash delivers performance parity with leading closed-source systems while remaining open and efficient.

Figure: Performance of Step 3.5 Flash measured across **Reasoning**, **Coding**, and **Agency**. Open-source models (left) are sorted by their total parameter count, while top-tier proprietary models are shown on the right. xbench-DeepSearch scores are sourced from [official publications](https://xbench.org/agi/aisearch) for consistency. The shadowed bars represent the enhanced performance of Step 3.5 Flash using [Parallel Thinking](https://arxiv.org/pdf/2601.05593).

### Detailed Benchmarks

| Benchmark | Step 3.5 Flash | DeepSeek V3.2 | Kimi K2 Thinking / K2.5 | GLM-4.7 | MiniMax M2.1 | MiMo-V2 Flash |
|---|---|---|---|---|---|---|
| # Activated Params | 11B | 37B | 32B | 32B | 10B | 15B |
| # Total Params (MoE) | 196B | 671B | 1T | 355B | 230B | 309B |
| Est. decoding cost (@ 128K context, Hopper GPU) | **1.0x** (100 tok/s, MTP-3, EP8) | 6.0x (33 tok/s, MTP-1, EP32) | 18.9x (33 tok/s, no MTP, EP32) | 18.9x (100 tok/s, MTP-3, EP8) | 3.9x (100 tok/s, MTP-3, EP8) | 1.2x (100 tok/s, MTP-3, EP8) |
| **Agency** | | | | | | |
| τ²-Bench | **88.2** | 80.3 | 74.3* / — | 87.4 | 80.2* | 80.3 |
| BrowseComp | 50.7 | 51.4 | 41.5* / **60.6** | 52.0 | 47.4 | 45.4 |
| BrowseComp (w/ Context Manager) | 69.0 | 67.6 | 60.2 / **74.9** | 67.5 | 62.0 | 58.3 |
| BrowseComp-ZH | **66.9** | 65.0 | 62.3 / 62.3* | 66.6 | 47.8* | 51.2* |
| BrowseComp-ZH (w/ Context Manager) | **73.7** | — | — / — | — | — | — |
| GAIA (no file) | **84.5** | 75.1* | 75.6* / 75.9* | 61.9* | 64.3* | 78.2* |
| xbench-DeepSearch (2025.05) | **83.7** | 78.0* | 76.0* / 76.7* | 72.0* | 68.7* | 69.3* |
| xbench-DeepSearch (2025.10) | **56.3** | 55.7* | — / 40+ | 52.3* | 43.0* | 44.0* |
| ResearchRubrics | **65.3** | 55.8* | 56.2* / 59.5* | 62.0* | 60.2* | 54.3* |
| **Reasoning** | | | | | | |
| AIME 2025 | **97.3** | 93.1 | 94.5 / 96.1 | 95.7 | 83.0 | 94.1 (95.1*) |
| HMMT 2025 (Feb.) | **98.4** | 92.5 | 89.4 / 95.4 | 97.1 | 71.0* | 84.4 (95.4*) |
| HMMT 2025 (Nov.) | **94.0** | 90.2 | 89.2* / — | 93.5 | 74.3* | 91.0* |
| IMOAnswerBench | **85.4** | 78.3 | 78.6 / 81.8 | 82.0 | 60.4* | 80.9* |
| **Coding** | | | | | | |
| LiveCodeBench-V6 | **86.4** | 83.3 | 83.1 / 85.0 | 84.9 | — | 80.6 (81.6*) |
| SWE-bench Verified | 74.4 | 73.1 | 71.3 / **76.8** | 73.8 | 74.0 | 73.4 |
| Terminal-Bench 2.0 | **51.0** | 46.4 | 35.7* / 50.8 | 41.0 | 47.9 | 38.5 |

**Notes**:
1. "—" indicates the score is not publicly available or not tested.
2. "*" indicates that the original score was inaccessible or lower than our reproduced result, so we report our evaluation under the same test conditions as Step 3.5 Flash to ensure fair comparability.
3. **BrowseComp (with Context Manager)**: When the effective context length exceeds a predefined threshold, the agent resets the context and restarts the agent loop. By contrast, Kimi K2.5 and DeepSeek-V3.2 used a "discard-all" strategy.
4. **Decoding Cost**: Estimates are based on a methodology similar to, but more accurate than, the approach described in [arxiv.org/abs/2507.19427](https://arxiv.org/abs/2507.19427).

## 4. Architecture Details

Step 3.5 Flash is built on a **Sparse Mixture-of-Experts (MoE)** transformer architecture, optimized for high throughput and low VRAM usage during inference.

### 4.1 Technical Specifications

| Component | Specification |
| :--- | :--- |
| **Backbone** | 45-layer Transformer (4,096 hidden dim) |
| **Context Window** | 256K |
| **Vocabulary** | 128,896 tokens |
| **Total Parameters** | **196.81B** (196B Backbone + 0.81B Head) |
| **Active Parameters** | **~11B** (per token generation) |

### 4.2 Mixture of Experts (MoE) Routing

Unlike traditional dense models, Step 3.5 Flash uses a fine-grained routing strategy to maximize efficiency (see the sketch after this list):
- **Fine-Grained Experts**: 288 routed experts per layer + 1 shared expert (always active).
- **Sparse Activation**: Only the Top-8 experts are selected per token.
- **Result**: The model retains the "memory" of a 196B parameter model but executes with the speed of an 11B model.
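
The expert counts below match the description above (288 routed experts, top-8 selection, plus one always-on shared expert), but everything else is a minimal illustrative sketch: the module layout, expert width, and tensor names are assumptions for readability, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    """Illustrative fine-grained MoE block: 288 routed experts, top-8 routing, 1 shared expert.
    Dimensions are shrunk so the sketch runs anywhere; the real model uses a 4,096 hidden dim."""

    def __init__(self, hidden=64, expert_ffn=128, n_experts=288, top_k=8):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(hidden, n_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(hidden, expert_ffn), nn.SiLU(), nn.Linear(expert_ffn, hidden))
            for _ in range(n_experts)
        ])
        self.shared_expert = nn.Sequential(
            nn.Linear(hidden, expert_ffn), nn.SiLU(), nn.Linear(expert_ffn, hidden)
        )

    def forward(self, x):
        # x: [num_tokens, hidden]
        probs = F.softmax(self.router(x), dim=-1)        # routing distribution over all experts
        weights, idx = probs.topk(self.top_k, dim=-1)    # keep only the top-8 experts per token
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = self.shared_expert(x)                      # shared expert is always active
        for t in range(x.size(0)):                       # naive per-token dispatch (clarity over speed)
            for w, e in zip(weights[t], idx[t]):
                out[t] = out[t] + w * self.experts[int(e)](x[t])
        return out

tokens = torch.randn(4, 64)
print(ToyMoELayer()(tokens).shape)  # torch.Size([4, 64]); only 8 of 288 routed experts ran per token
```

Only the eight selected experts (plus the shared expert) execute for a given token, which is how the model keeps 196B parameters resident while activating roughly 11B per token.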

### 4.3 Multi-Token Prediction (MTP)

To improve inference speed, we utilize a specialized MTP Head consisting of a sliding-window attention mechanism and a dense Feed-Forward Network (FFN). This module predicts 4 tokens simultaneously in a single forward pass, significantly accelerating inference without degrading quality.
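
The reason MTP accelerates decoding without degrading quality is that drafted tokens are only kept when the full model agrees with them. The loop below is a minimal, framework-agnostic sketch of that greedy accept/verify idea; `draft_step` and `verify_step` are hypothetical stand-ins for the MTP head and the main model, not APIs from the released code.

```python
from typing import Callable, List

def speculative_decode(
    prompt: List[int],
    draft_step: Callable[[List[int]], List[int]],              # MTP head: proposes several tokens at once
    verify_step: Callable[[List[int], List[int]], List[int]],  # full model: its own prediction at each drafted position
    max_new_tokens: int = 64,
) -> List[int]:
    """Greedy speculative decoding: keep the longest drafted prefix the full model agrees with."""
    tokens = list(prompt)
    produced = 0
    while produced < max_new_tokens:
        draft = draft_step(tokens)                 # e.g. 4 drafted tokens per forward pass
        targets = verify_step(tokens, draft)       # one full-model pass over tokens + draft
        accepted = 0
        while accepted < len(draft) and draft[accepted] == targets[accepted]:
            accepted += 1
        tokens += draft[:accepted]                 # the agreed prefix is kept "for free"
        if accepted < len(draft):
            tokens.append(targets[accepted])       # at the first disagreement, take the full model's token
            produced += accepted + 1
        else:
            produced += accepted
    return tokens
```

Because every emitted token is checked against the full model, the output matches ordinary greedy decoding; the speed-up comes from amortizing one verification pass over several drafted tokens.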

## 5. Quick Start

You can get started with Step 3.5 Flash in minutes using the cloud API via our supported providers.

### 5.1 Get Your API Key
Choose a provider and obtain your credentials. OpenRouter currently offers a free trial for Step 3.5 Flash.

| Provider | API Key Link | Base URL |
| :--- | :--- | :--- |
| OpenRouter | https://openrouter.ai/keys | https://openrouter.ai/api/v1 |
| StepFun | https://platform.stepfun.ai/interface-key | https://api.stepfun.ai/v1 |
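
Wherever the key comes from, avoid hard-coding it; one common pattern is to export it once and read it from the environment (the variable name below is just an example, not something the SDK requires):

```bash
# Example only: keep the key in the shell environment instead of in source files.
export STEPFUN_API_KEY="sk-..."
python -c 'import os; print("key loaded:", bool(os.environ.get("STEPFUN_API_KEY")))'
```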

### 5.2 Setup

Install the standard OpenAI SDK (compatible with both platforms).

```bash
pip install --upgrade "openai>=1.0"
```

Note: OpenRouter supports multiple SDKs. Learn more [here](https://openrouter.ai/docs/quickstart).

### 5.3 Implementation Example

This example shows how to start a chat with Step 3.5 Flash.

```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.stepfun.ai/v1",  # or "https://openrouter.ai/api/v1"
    # Optional: OpenRouter headers for app rankings
    default_headers={
        "HTTP-Referer": "<YOUR_SITE_URL>",
        "X-Title": "<YOUR_SITE_NAME>",
    },
)

completion = client.chat.completions.create(
    model="step-3.5-flash",  # Use "stepfun/step-3.5-flash" for OpenRouter
    messages=[
        {
            "role": "system",
            "content": "You are an AI chat assistant provided by StepFun. You are good at Chinese, English, and many other languages.",
        },
        {
            "role": "user",
            "content": "Introduce StepFun's artificial intelligence capabilities."
        },
    ],
)

print(completion.choices[0].message.content)
```
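
For interactive use you can also stream tokens as they are generated. This reuses the `client` and model name from the snippet above and relies only on the standard OpenAI SDK streaming interface.

```python
# Stream the response token-by-token instead of waiting for the full completion.
stream = client.chat.completions.create(
    model="step-3.5-flash",
    messages=[{"role": "user", "content": "Summarize the benefits of sparse MoE models."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```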

## 6. Local Deployment

Step 3.5 Flash is optimized for local inference and supports industry-standard backends including vLLM, SGLang, Hugging Face Transformers, and llama.cpp.

### 6.1 vLLM
We recommend using the latest nightly build of vLLM.
1. Install vLLM.

```bash
# via Docker
docker pull vllm/vllm-openai:nightly

# or via pip (nightly wheels)
pip install -U vllm --pre \
    --index-url https://pypi.org/simple \
    --extra-index-url https://wheels.vllm.ai/nightly
```
2. Launch the server.

**Note**: Full MTP-3 support is not yet available in vLLM. We are actively working on a Pull Request to integrate this feature, which is expected to significantly enhance decoding performance.

- For fp8 model
```bash
vllm serve <MODEL_PATH_OR_HF_ID> \
    --served-model-name step3p5-flash \
    --tensor-parallel-size 8 \
    --enable-expert-parallel \
    --disable-cascade-attn \
    --reasoning-parser step3p5 \
    --enable-auto-tool-choice \
    --tool-call-parser step3p5 \
    --hf-overrides '{"num_nextn_predict_layers": 1}' \
    --speculative_config '{"method": "step3p5_mtp", "num_speculative_tokens": 1}' \
    --trust-remote-code \
    --quantization fp8
```

- For bf16 model
```bash
vllm serve <MODEL_PATH_OR_HF_ID> \
    --served-model-name step3p5-flash \
    --tensor-parallel-size 8 \
    --enable-expert-parallel \
    --disable-cascade-attn \
    --reasoning-parser step3p5 \
    --enable-auto-tool-choice \
    --tool-call-parser step3p5 \
    --hf-overrides '{"num_nextn_predict_layers": 1}' \
    --speculative_config '{"method": "step3p5_mtp", "num_speculative_tokens": 1}' \
    --trust-remote-code
```
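
Once the server is up, you can sanity-check it with a request to the OpenAI-compatible endpoint. This assumes vLLM's default port 8000 (no `--port` is set above) and the served model name from the launch command.

```bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "step3p5-flash",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}]
      }'
```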

### 6.2 SGLang

1. Install SGLang.
```bash
# via Docker
docker pull lmsysorg/sglang:latest
# or from source (pip)
pip install "sglang[all] @ git+https://github.com/sgl-project/sglang.git"
```

2. Launch the server.
- For bf16 model
```bash
SGLANG_ENABLE_SPEC_V2=1 \
python3 -m sglang.launch_server \
    --model-path <MODEL_PATH_OR_HF_ID> \
    --served-model-name step3p5-flash \
    --tp-size 8 \
    --tool-call-parser step3p5 \
    --reasoning-parser step3p5 \
    --speculative-algorithm EAGLE \
    --speculative-num-steps 3 \
    --speculative-eagle-topk 1 \
    --speculative-num-draft-tokens 4 \
    --enable-multi-layer-eagle \
    --host 0.0.0.0 \
    --port 8000
```
- For fp8 model
```bash
SGLANG_ENABLE_SPEC_V2=1 \
python3 -m sglang.launch_server \
    --model-path <MODEL_PATH_OR_HF_ID> \
    --served-model-name step3p5-flash \
    --tp-size 8 \
    --ep-size 8 \
    --tool-call-parser step3p5 \
    --reasoning-parser step3p5 \
    --speculative-algorithm EAGLE \
    --speculative-num-steps 3 \
    --speculative-eagle-topk 1 \
    --speculative-num-draft-tokens 4 \
    --enable-multi-layer-eagle \
    --host 0.0.0.0 \
    --port 8000
```
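
Both launch commands above expose an OpenAI-compatible API on port 8000, so the client from Section 5.3 works against the local server as well; only the base URL and the (placeholder) API key change.

```python
from openai import OpenAI

# Point the standard OpenAI SDK at the local vLLM/SGLang server started above.
local = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = local.chat.completions.create(
    model="step3p5-flash",  # must match --served-model-name
    messages=[{"role": "user", "content": "Write a haiku about sparse experts."}],
)
print(resp.choices[0].message.content)
```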

### 6.3 Transformers (Debug / Verification)

Use this snippet for quick functional verification. For high-throughput serving, use vLLM or SGLang.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "<MODEL_PATH_OR_HF_ID>"

# 1. Setup
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    trust_remote_code=True,
    torch_dtype="auto",
    device_map="auto",
)

# 2. Prepare Input
messages = [{"role": "user", "content": "Explain the significance of the number 42."}]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

# 3. Generate
generated_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
output_text = tokenizer.decode(generated_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)

print(output_text)
```

### 6.4 llama.cpp

#### System Requirements
- GGUF Model Weights (int4): 111.5 GB
- Runtime Overhead: ~7 GB
- Minimum VRAM: 120 GB (e.g., Mac Studio, DGX Spark, AMD Ryzen AI Max+ 395)
- Recommended: 128 GB unified memory

#### Steps
1. Clone the repository and check out the step3.5 branch:
```bash
git clone https://github.com/stepfun-ai/Step-3.5-Flash.git
cd Step-3.5-Flash
git checkout feature/step3.5-flash
```
2. Build llama.cpp on Mac:
```bash
cmake -S . -B build-macos \
    -DCMAKE_BUILD_TYPE=Release \
    -DGGML_METAL=ON \
    -DGGML_ACCELERATE=ON \
    -DLLAMA_BUILD_EXAMPLES=ON \
    -DLLAMA_BUILD_COMMON=ON \
    -DGGML_LTO=ON
cmake --build build-macos -j8
```
3. Build llama.cpp on DGX Spark:
```bash
cmake -S . -B build-cuda \
    -DCMAKE_BUILD_TYPE=Release \
    -DGGML_CUDA=ON \
    -DGGML_CUDA_GRAPHS=ON \
    -DLLAMA_CURL=OFF \
    -DLLAMA_BUILD_EXAMPLES=ON \
    -DLLAMA_BUILD_COMMON=ON
cmake --build build-cuda -j8
```
4. Build llama.cpp on AMD Windows:
```bash
cmake -S . -B build-vulkan \
    -DCMAKE_BUILD_TYPE=Release \
    -DLLAMA_CURL=OFF \
    -DGGML_OPENMP=ON \
    -DGGML_VULKAN=ON
cmake --build build-vulkan -j8
```
5. Run with llama-cli:
```bash
./llama-cli -m step3.5_flash_Q4_K_S.gguf -c 16384 -b 2048 -ub 2048 -fa on --temp 1.0 -p "What's your name?"
```
6. Test performance with llama-batched-bench:
```bash
./llama-batched-bench -m step3.5_flash_Q4_K_S.gguf -c 32768 -b 2048 -ub 2048 -npp 0,2048,8192,16384,32768 -ntg 128 -npl 1
```
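
If you prefer an HTTP endpoint over the interactive CLI, llama.cpp's bundled `llama-server` can expose the same GGUF file through an OpenAI-compatible API; the context size and port below are arbitrary example values, not project defaults.

```bash
# Serve the model over HTTP (OpenAI-compatible /v1/chat/completions endpoint).
./llama-server -m step3.5_flash_Q4_K_S.gguf -c 16384 --host 0.0.0.0 --port 8080
```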

## 7. Using Step 3.5 Flash on Agent Platforms

### 7.1 Claude Code & Codex
It's straightforward to add Step 3.5 Flash to the list of models in most coding environments. See below for instructions on configuring Claude Code and Codex to use Step 3.5 Flash.

#### 7.1.1 Prerequisites
Sign up at StepFun.ai or OpenRouter and grab an API key, as mentioned in the Quick Start.

#### 7.1.2 Environment setup
Claude Code and Codex rely on Node.js. We recommend installing a Node.js version newer than v20. You can install Node via nvm.

**Mac/Linux**:
```bash
# Install nvm on Mac/Linux via curl:
# Step 1
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash

# Load nvm in the current shell
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"  # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"

# Users in China can set up an npm mirror
npm config set registry https://registry.npmmirror.com

# Step 2
nvm install v22

# Make sure Node.js is installed
node --version

npm --version
```

**Windows**:
You can download the installation file (`nvm-setup.exe`) from [https://github.com/coreybutler/nvm-windows/releases](https://github.com/coreybutler/nvm-windows/releases). Follow the instructions to install nvm. Run nvm commands to make sure it is installed.

#### 7.1.3 Use Step 3.5 Flash on Claude Code

1. Install Claude Code.
```bash
# install claude code via npm
npm install -g @anthropic-ai/claude-code

# test if the installation is successful
claude --version
```

2. Configure Claude Code.

We support the OpenAI and Anthropic API styles for integration into Claude Code.

Note: OpenAI API style here refers to the `chat/completions` format.

We recommend using `claude-code-router`. For details, see [https://github.com/musistudio/claude-code-router](https://github.com/musistudio/claude-code-router).

After Claude Code is installed, install `claude-code-router`:

```bash
# install ccr via npm
npm install -g @musistudio/claude-code-router

# validate it is installed
ccr -v
```

Add the following configuration to `~/.claude-code-router/config.json`.

```json
{
  "PORT": 3456,
  "Providers": [
    {
      "name": "stepfun-api",
      "api_base_url": "https://api.stepfun.com/v1/chat/completions",
      "api_key": "StepFun_API_KEY",
      "models": ["step-3.5-flash"],
      "transformer": {
        "step-3.5-flash": { "use": ["OpenAI"] }
      }
    }
  ],
  "Router": {
    "default": "stepfun-api,step-3.5-flash",
    "background": "stepfun-api,step-3.5-flash",
    "think": "stepfun-api,step-3.5-flash",
    "longContext": "stepfun-api,step-3.5-flash",
    "webSearch": "stepfun-api,step-3.5-flash"
  }
}
```
You can now start Claude Code:

```bash
# Start Claude
ccr code

# restart ccr if configs are changed
ccr restart
```

#### 7.1.4 Use Step 3.5 Flash on Codex
1. Install Codex
```bash
# Install codex via npm
npm install -g @openai/codex

# Test if it is installed
codex --version
```

2. Configure Codex
Add the following settings to `~/.codex/config.toml`, keeping the rest of the settings as they are.

```toml
model = "step-3.5-flash"
model_provider = "stepfun-chat"
preferred_auth_method = "apikey"

# configure the provider
[model_providers.stepfun-chat]
name = "OpenAI using response"
base_url = "https://api.stepfun.com/v1"
env_key = "OPENAI_API_KEY"
wire_api = "chat"
query_params = {}
```
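
Because the provider block reads the key from the environment variable named in `env_key` above, export your StepFun key under that name before launching Codex:

```bash
# Codex reads the key from the env var named by `env_key` in config.toml
export OPENAI_API_KEY="YOUR_STEPFUN_API_KEY"
```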

For Codex, `wire_api` only supports `chat`. If you were using the `responses` mode, you'll need to change it to `chat`. Please also switch `model_provider` to the newly configured `stepfun-chat`.

When you have finished the configuration, run `codex` in a new terminal window to start Codex. Run `/status` to check the configuration.

```text
/status
📂 Workspace
  • Path: /Users/step-test/
  • Approval Mode: on-request
  • Sandbox: workspace-write
  • AGENTS files: (none)

🧠 Model
  • Name: step-3.5-flash
  • Provider: Stepfun-chat

💻 Client
  • CLI Version: 0.40.0
```

#### 7.1.5 Use Step 3.5 Flash on Step-DeepResearch (DeepResearch)
1. Follow the reference environment setup at [https://github.com/stepfun-ai/StepDeepResearch?tab=readme-ov-file#1-environment-setup](https://github.com/stepfun-ai/StepDeepResearch?tab=readme-ov-file#1-environment-setup) and set `MODEL_NAME` to `Step-3.5-Flash`.

## 8. Limitations, Known Issues and Future Directions

1. **Token Efficiency**. Step 3.5 Flash achieves frontier-level agentic intelligence but currently relies on longer generation trajectories than Gemini 3.0 Pro to reach comparable quality.
2. **Efficient Universal Mastery**. We aim to unify generalist versatility with deep domain expertise. To achieve this efficiently, we are advancing variants of on-policy distillation, allowing the model to internalize expert behaviors with higher sample efficiency.
3. **RL for More Agentic Tasks**. While Step 3.5 Flash demonstrates competitive performance on academic agentic benchmarks, the next frontier of agentic AI necessitates the application of RL to intricate, expert-level tasks found in professional work, engineering, and research.
4. **Operational Scope and Constraints**. Step 3.5 Flash is tailored for coding and work-centric tasks, but may experience reduced stability during distribution shifts. This typically occurs in highly specialized domains or long-horizon, multi-turn dialogues, where the model may exhibit repetitive reasoning, mixed-language outputs, or inconsistencies in time and identity awareness.

## 9. Co-Developing the Future

We view our roadmap as a living document, evolving continuously based on real-world usage and developer feedback.
As we work to shape the future of AGI by expanding broad model capabilities, we want to ensure we are solving the right problems. We invite you to be part of this continuous feedback loop—your insights directly influence our priorities.

- **Join the Conversation**: Our Discord community is the primary hub for brainstorming future architectures, proposing capabilities, and getting early access updates 🚀
- **Report Friction**: Encountering limitations? You can open an issue on GitHub or flag it directly in our Discord support channels.

## License
This project is open-sourced under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).

## 1. Introduction

**Step3.5** is our most capable open-source reasoning model, purpose-built for agentic workflows.
It bridges the gap between massive scale and high performance by combining 196B parameters of knowledge with the inference latency of an 11B model.
We prioritized developer needs to balance speed, cost, and accessibility. This enables the creation of production-grade agents that are fast, stable, and cost-effective.

## 2. Key Capabilities

- Frontier intelligence at 200 tokens/s: Step3.5 matches GPT-5 and Gemini 3.0 Pro in reasoning but runs 4x faster. By leveraging Multi-Token Prediction (MTP-3), Step3.5 predicts three tokens simultaneously, achieving 200 tokens/s for real-time responsiveness.
- Easy local deployment: Despite its massive 196B total parameter count, Step3.5's sparse MoE architecture allows it to run locally on high-end consumer hardware (e.g., Mac Studio M2/M3 Ultra). This enables secure, offline deployment of elite-level intelligence.
- Agentic & coding mastery: Step3.5 is fine-tuned for reliability. It achieves 85.5% on LiveCodeBench and 72.1% on SWE-bench Verified, making it a robust engine for autonomous software engineering and multi-step planning.
- Cost-effective long context: Optimized with a 3:1 sliding window attention strategy (512 window), Step3.5 handles extended contexts with minimal memory overhead, perfect for RAG applications and analyzing large codebases.

## 3. Benchmarks

## Architecture

### Key Features:
- Hybrid Attention Schedules and Compensation for SWA
- Mixture-of-Experts Routing and Load Balancing

### Architecture Details

- Backbone: 45-layer Transformer
- Vocabulary: 128,896 tokens
- Hidden Dim: 4,096
- MoE Blocks:
  - 288 routed experts + 1 shared expert per block
  - Top-8 expert selection per token
- Parameters:
  - Total: 196.81B (Backbone: 196B + MTP Head: 0.81B)
  - Activated per token: 11B (excludes embedding/output projections)
- Special Components:
  - Multi-token Prediction (MTP) head with sliding-window attention and dense FFN

## 5. Getting started

## Deployment Resource Specifications

- Model Weights: 20 GB
- Runtime Overhead: ~4 GB
- Minimum VRAM Required: 24 GB (e.g., RTX 4090 or A100)

## Deploy Step3.5 Locally

For local deployment, Step3.5-preview supports inference frameworks including vLLM and SGLang. Comprehensive deployment instructions are available in the official [GitHub](#) repository.

vLLM and SGLang only support Step3.5-preview on their main branches. You can use their official Docker images for inference.

### vLLM

Using Docker:

```shell
docker pull vllm/vllm-openai:nightly
```

or via pip (must use pypi.org as the index URL):

```shell
pip install -U vllm --pre --index-url https://pypi.org/simple --extra-index-url https://wheels.vllm.ai/nightly
```

### SGLang

Using Docker:

```shell
docker pull lmsysorg/sglang:dev
```

or install SGLang from source via pip.

### transformers

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "xxxxxx"
messages = [{"role": "user", "content": "hello"}]
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
)
model = AutoModelForCausalLM.from_pretrained(
    pretrained_model_name_or_path=MODEL_PATH,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
inputs = inputs.to(model.device)
generated_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
output_text = tokenizer.decode(generated_ids[0][inputs.input_ids.shape[1]:])
print(output_text)
```

### vLLM

```shell
vllm serve {xxx} \
    --tensor-parallel-size 4 \
    --speculative-config.method mtp \
    --speculative-config.num_speculative_tokens 1 \
    --tool-call-parser {xxx} \
    --reasoning-parser {xxx} \
    --enable-auto-tool-choice \
    --served-model-name {xxx}
```

### SGLang

```shell
python3 -m sglang.launch_server \
    --model-path {xxx} \
    --tp-size 8 \
    --tool-call-parser {xxx} \
    --reasoning-parser {xxx} \
    --speculative-algorithm EAGLE \
    --speculative-num-steps 3 \
    --speculative-eagle-topk 1 \
    --speculative-num-draft-tokens 4 \
    --mem-fraction-static 0.8 \
    --served-model-name {xxx} \
    --host 0.0.0.0 \
    --port 8000
```

### Parameter Instructions

- When using `vLLM` and `SGLang`, thinking mode is enabled by default when sending requests.
- Both support tool calling. Please use the OpenAI-style tool description format for calls (see the example below).
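
As a reference for that format, here is a minimal OpenAI-style tool definition passed through the chat completions API against a local server; the function name, parameters, base URL, and served model name are placeholders, not values required by the project.

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # placeholder tool
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="step3p5-flash",  # whatever name was passed to --served-model-name
    messages=[{"role": "user", "content": "What's the weather in Shanghai?"}],
    tools=tools,
)
print(resp.choices[0].message.tool_calls)
```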

## Citation

If you find our work useful in your research, please consider citing the following paper:

```bibtex
@misc{xxxx,
  title={Step3.5-preview},
  author={StepFun Team},
  year={2026},
  eprint={xxxx},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/xxxxx},
}
```

## 📄 License

This project is open-sourced under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).