[]()
[](https://openrouter.ai/chat?models=stepfun/step-3.5-flash:free)

**Quick chat in [Huggingface Space](https://huggingface.co/spaces/stepfun-ai/Step-3.5-Flash)**
</div>

## 1. Introduction

**Step 3.5 Flash** ([visit website](https://static.stepfun.com/blog/step-3.5-flash/)) is our most capable open-source foundation model, engineered to deliver frontier reasoning and agentic capabilities with exceptional efficiency. Built on a sparse Mixture of Experts (MoE) architecture, it selectively activates only 11B of its 196B parameters per token. This "intelligence density" allows it to rival the reasoning depth of top-tier proprietary models while maintaining the agility required for real-time interaction.
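To give a feel for what "activating only a fraction of parameters per token" means, here is a minimal, illustrative sketch of sparse MoE routing. The router design, expert count, and top-k value below are all hypothetical placeholders for illustration; Step 3.5 Flash's actual architecture details are not specified here.

```python
# Illustrative sketch of sparse Mixture-of-Experts (MoE) routing.
# Hypothetical sizes: 8 experts, top-2 routing -- so only 2/8 of the
# expert parameters run for each token, analogous in spirit to
# activating 11B of 196B total parameters.
import numpy as np

rng = np.random.default_rng(0)

D, H, E, K = 16, 32, 8, 2             # hidden dim, expert dim, #experts, top-k
W_gate = rng.standard_normal((D, E))  # router (gating) weights
experts = [(rng.standard_normal((D, H)), rng.standard_normal((H, D)))
           for _ in range(E)]         # each expert: a small 2-layer MLP

def moe_forward(x):
    """Route token vector x to its top-K experts and mix their outputs."""
    logits = x @ W_gate
    top = np.argsort(logits)[-K:]           # indices of the K highest scores
    w = np.exp(logits[top] - logits[top].max())
    w /= w.sum()                            # softmax over the selected experts
    out = np.zeros_like(x)
    for weight, idx in zip(w, top):
        w1, w2 = experts[idx]
        out += weight * (np.maximum(x @ w1, 0.0) @ w2)  # ReLU MLP expert
    return out, top

x = rng.standard_normal(D)
y, used = moe_forward(x)
print(f"active experts for this token: {sorted(used.tolist())} of {E}")
```

Only the K selected experts execute their matrix multiplies for a given token, which is what keeps per-token compute low even though total parameter count is large.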