---
license: other
license_name: prism-research
license_link: LICENSE.md
language:
- en
- zh
tags:
- stepfun
- prism
- moe
- reasoning
- coding
- agentic
- abliterated
pipeline_tag: text-generation
library_name: transformers
base_model:
- stepfun-ai/Step-3.5-Flash
---

[![Parameters](https://img.shields.io/badge/Parameters-196B_(11B_Active)-blue)]()
[![Architecture](https://img.shields.io/badge/Architecture-MoE-green)]()
[![Context](https://img.shields.io/badge/Context-256K-orange)]()
[![MTP](https://img.shields.io/badge/MTP--3-350_tok%2Fs_Peak-purple)]()

<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63adf1fa42fd3b8dbaeb0c92/ZBA5B381EC5oOmnAV7TPC.png" width="400"/>
</p>

# Step-3.5-Flash-PRISM

A role-play-following, unrestricted/unchained PRISM-LITE version of [StepFun's Step 3.5 Flash](https://huggingface.co/stepfun-ai/Step-3.5-Flash), built in particular to suppress over-refusal and propaganda mechanisms using our SOTA PRISM pipeline.

For a custom, full production PRISM version or the tensors, reach out.

<div align="center">

### ☕ Support Our Work

If you enjoy our work and find it useful, please consider sponsoring or supporting us!

[![Ko-fi](https://img.shields.io/badge/Ko--fi-Support%20Us-ff5e5b?logo=ko-fi&logoColor=white)](https://ko-fi.com/ericelbaz)

| Option | Description |
|--------|-------------|
| [**PRISM VIP Membership**](https://ko-fi.com/summary/6bae206c-a751-4868-8dc7-f531afd1fb4c) | Access to all PRISM models |

</div>

---

## Model Highlights

- **PRISM Ablation** — State-of-the-art technique that removes over-refusal behaviors while preserving model capabilities
- **196B MoE Architecture** — 196 billion total parameters with only 11 billion active per token across 288 fine-grained routed experts + 1 shared expert
- **Multi-Token Prediction (MTP-3)** — Predicts 4 tokens per forward pass, achieving 100–300 tok/s typical throughput (peaking at 350 tok/s)
- **256K Context Window** — Cost-efficient long context via a 3:1 Sliding Window Attention (SWA) ratio
- **Frontier Reasoning & Coding** — 97.3 on AIME 2025, 74.4 on SWE-bench Verified, 51.0 on Terminal-Bench 2.0
- **Accessible Local Deployment** — Runs on high-end consumer hardware (Mac Studio M4 Max, NVIDIA DGX Spark)

## Model Architecture

| Specification | Value |
|---------------|-------|
| Architecture | Sparse Mixture-of-Experts (MoE) |
| Backbone | 45-layer Transformer (4,096 hidden dim) |
| Total Parameters | 196.81B (196B backbone + 0.81B head) |
| Activated Parameters | ~11B per token |
| Routed Experts per Layer | 288 |
| Shared Experts | 1 (always active) |
| Selected Experts per Token | Top-8 |
| Vocabulary Size | 128,896 |
| Context Length | 256K tokens |
| Attention | Hybrid SWA (3:1 SWA-to-full ratio) |
| MTP Head | Sliding-window attention + dense FFN (4 tokens/pass) |
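
With top-8 routing over 288 experts, only about 8/288 ≈ 2.8% of the routed experts fire for any given token. The sketch below is a back-of-envelope plausibility check of the "~11B active" figure above; the dense (always-on) parameter share is a hypothetical free variable, not a value from this card. A dense share around 6B reproduces the stated number.

```python
# Plausibility check of "~11B activated parameters" from the table above.
# Assumption (not from the card): routed experts are equally sized, and a
# share of `dense_b` billion parameters (attention, embeddings, shared
# expert, MTP head) is always active.
total = 196.81e9   # total parameters
routed = 288       # routed experts per layer
top_k = 8          # experts selected per token

expert_fraction = top_k / routed  # ~2.8% of routed experts fire per token
print(f"{expert_fraction:.1%} of routed experts active per token")

for dense_b in (4, 6, 8):  # hypothetical always-on parameter budgets
    dense = dense_b * 1e9
    active = dense + expert_fraction * (total - dense)
    print(f"dense={dense_b}B -> ~{active / 1e9:.1f}B active")  # 6B -> ~11.3B
```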

## Benchmarks

| Benchmark | Step 3.5 Flash | DeepSeek V3.2 | Kimi K2.5 | GLM-4.7 | MiniMax M2.1 |
|-----------|---------------|---------------|-----------|---------|--------------|
| **Agent** | | | | | |
| τ²-Bench | 88.2 | 80.3 | 85.4 | 87.4 | 86.6 |
| BrowseComp | 51.6 | 51.4 | 60.6 | 52.0 | 47.4 |
| GAIA (no file) | 84.5 | 75.1 | 75.9 | 61.9 | 64.3 |
| xbench-DeepSearch (2025.05) | 83.7 | 78.0 | 76.7 | 72.0 | 68.7 |
| **Reasoning** | | | | | |
| AIME 2025 | 97.3 | 93.1 | 96.1 | 95.7 | 83.0 |
| HMMT 2025 (Feb.) | 98.4 | 92.5 | 95.4 | 97.1 | 71.0 |
| IMOAnswerBench | 85.4 | 78.3 | 81.8 | 82.0 | 60.4 |
| **Coding** | | | | | |
| LiveCodeBench-V6 | 86.4 | 83.3 | 85.0 | 84.9 | — |
| SWE-bench Verified | 74.4 | 73.1 | 76.8 | 73.8 | 74.0 |
| Terminal-Bench 2.0 | 51.0 | 46.4 | 50.8 | 41.0 | 47.9 |

## Usage

### llama.cpp (GGUF)

For local deployment (requires ~120 GB of VRAM or unified memory for INT4; smaller quantizations are available):

```bash
./llama-cli -m step3.5_flash_prism_Q4_K_S.gguf --jinja
```
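
The same GGUF can also be driven from Python. A minimal sketch using the llama-cpp-python bindings (the bindings and the settings below are our assumptions; the card itself only shows `llama-cli`):

```python
from llama_cpp import Llama

# Same quant as shown above; adjust the path to your local file.
llm = Llama(
    model_path="step3.5_flash_prism_Q4_K_S.gguf",
    n_ctx=32768,      # a practical context budget; the model supports up to 256K
    n_gpu_layers=-1,  # offload all layers to the GPU if memory allows
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize MoE routing in one paragraph."}],
    temperature=0.6,  # "General Chat" settings from the table below
    top_p=0.95,
    max_tokens=4096,
)
print(out["choices"][0]["message"]["content"])
```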

## Recommended Parameters

| Use Case | Temperature | Top-P | Max New Tokens |
|----------|-------------|-------|----------------|
| Reasoning / Coding | 1.0 | 0.95 | 32768 |
| General Chat | 0.6 | 0.95 | 4096 |
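
Applied with Hugging Face `transformers` (the library this card declares), a minimal generation sketch. The repo id is a placeholder assumption, and `trust_remote_code=True` may be needed depending on how the architecture is registered:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ex0bit/Step-3.5-Flash-PRISM"  # hypothetical repo id, for illustration

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that merges two sorted lists."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# "Reasoning / Coding" row from the table above.
outputs = model.generate(inputs, do_sample=True, temperature=1.0, top_p=0.95, max_new_tokens=32768)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```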

## Hardware Requirements

| Setup | Details |
|-------|---------|
| **BF16 (Full)** | 8x H100/A100 80GB with tensor parallelism |
| **FP8 Quantized** | 8x A100 80GB with expert parallelism |
| **GGUF INT4 (Local)** | ~120 GB unified memory (Mac Studio M4 Max 128GB, DGX Spark, AMD Ryzen AI Max+ 395) |
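
The INT4 figure is consistent with a simple weights-only estimate. A sketch, assuming Q4_K_S averages roughly 4.5 bits per weight including quantization metadata (a common ballpark, not a measured value for this model):

```python
# Rough weight-memory estimate for the GGUF INT4 row above.
params = 196.81e9       # total parameters
bits_per_weight = 4.5   # assumed average for Q4_K_S, incl. quant metadata

weight_gb = params * bits_per_weight / 8 / 1e9
print(f"~{weight_gb:.0f} GB for weights alone")  # ~111 GB
# KV cache and activations push the practical footprint toward the
# ~120 GB figure in the table.
```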

## License

This model is released under the [PRISM Research License](LICENSE.md).

## Acknowledgments

Based on [Step 3.5 Flash](https://huggingface.co/stepfun-ai/Step-3.5-Flash) by [StepFun AI](https://www.stepfun.com). See the [technical report](https://github.com/stepfun-ai/Step-3.5-Flash/blob/main/step_3p5_flash_tech_report.pdf) and [blog post](https://static.stepfun.com/blog/step-3.5-flash/) for more details on the base model.