![Screenshot](https://huggingface.co/VibeStudio/MiniMax-M2-THRIFT/resolve/main/vibe_processed_by_imagy.png)

# VibeStudio/MiniMax-M2-THRIFT-55-v1

**Targeted Reduction for Inference and Fine-Tuning — ~55% Expert Pruned**

A lean, efficiency-first variant of MiniMax-M2 designed to maximize **latency, throughput, and VRAM savings** for local, on-prem, and edge deployments.

## TL;DR

* **What:** A ~55% expert-pruned MoE built with staged pruning + knowledge distillation.
* **Why:** Push the efficiency frontier for compact, responsive deployments.
* **Now:** Ready for experimentation, with solid coverage across core evals and more on the way.

---

## Why it’s useful

* **Lower latency:** Fast, responsive interactions for interactive apps and tools.
* **Smaller memory footprint:** Fits tighter VRAM budgets and increases node density.
* **Higher throughput:** Serve more concurrent users on the same hardware.
* **Deployment-friendly:** Smooth drop-in via SGLang with an OpenAI-compatible API.
* **Adaptable:** Plays well with light fine-tuning to match domain and style.

## Intended use

* Local/air-gapped assistants and dev tools
* Cost-sensitive batch and real-time services
* Edge and on-prem deployments that prioritize efficiency

---

## How Our Approach Works (Short)

> **Active research in progress** — we continue to iterate and expand ablations.

* **Teacher–student setup:** Start with **MiniMax-M2** as the teacher and a copy as the student.
* **Gradual expert pruning:** Remove **≈5% of experts per stage** over **~11 stages** (≈**55% total**), guided by importance scores with a lightweight **Leave-One-Expert-Out** check to retain rare-but-important experts.
* **Distill after each prune:** Retrain the student to imitate the teacher on three signals (sketched in code after this list):

  * **Outputs** (token probability distributions),
  * **Hidden states**, and
  * **Router behavior** over the **surviving experts**.
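
To make the staged loop concrete, here is a minimal, self-contained sketch of one prune-then-distill stage. Everything in it is illustrative: the helper names (`pick_experts_to_prune`, `distill_loss`), the guard threshold, the loss weights, and the toy tensors are assumptions, not the released training code; a real stage operates per MoE layer on live model outputs.

```python
# Illustrative sketch of one prune + distill stage (NOT the production
# pipeline): hypothetical helpers and toy tensors stand in for real models.
import torch
import torch.nn.functional as F

def pick_experts_to_prune(importance: torch.Tensor,
                          loeo_delta: torch.Tensor,
                          frac: float = 0.05,
                          guard: float = 0.02) -> list[int]:
    """Select ~`frac` of experts for removal, least important first, but
    keep any expert whose Leave-One-Expert-Out loss increase (`loeo_delta`)
    exceeds `guard` -- the rare-but-important case."""
    budget = max(1, int(frac * importance.numel()))
    pruned: list[int] = []
    for idx in torch.argsort(importance).tolist():  # ascending importance
        if len(pruned) == budget:
            break
        if loeo_delta[idx] <= guard:
            pruned.append(idx)
    return pruned

def distill_loss(s_logits, t_logits, s_hidden, t_hidden,
                 s_router, t_router, kept,
                 w_out=1.0, w_hid=0.5, w_rte=0.5):
    """Three-part distillation target: outputs, hidden states, router."""
    # 1) Outputs: KL(teacher || student) over token distributions.
    out = F.kl_div(F.log_softmax(s_logits, dim=-1),
                   F.softmax(t_logits, dim=-1), reduction="batchmean")
    # 2) Hidden states: match intermediate representations.
    hid = F.mse_loss(s_hidden, t_hidden)
    # 3) Router: teacher routing restricted to surviving experts,
    #    renormalized so it is again a probability distribution.
    t_kept = F.normalize(t_router[..., kept], p=1, dim=-1)
    rte = F.kl_div(torch.log(s_router + 1e-9), t_kept, reduction="batchmean")
    return w_out * out + w_hid * hid + w_rte * rte

# Toy usage: 32 experts in one MoE layer, one ~5% pruning stage.
E = 32
pruned = pick_experts_to_prune(torch.rand(E), torch.rand(E) * 0.05)
kept = [e for e in range(E) if e not in pruned]
loss = distill_loss(
    torch.randn(2, 4, 16), torch.randn(2, 4, 16),         # logits (toy vocab 16)
    torch.randn(2, 4, 8), torch.randn(2, 4, 8),           # hidden states
    torch.softmax(torch.randn(2, 4, len(kept)), dim=-1),  # student router
    torch.softmax(torch.randn(2, 4, E), dim=-1),          # teacher router
    kept,
)
```

After each stage, a loss of this shape is minimized on distillation data before the next ~5% of experts is removed.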

---

# Model Report — THRIFT-55-v1

**Evaluation windows:** Nov 7–9, 2025 & Nov 24, 2025
**Last updated:** Nov 25, 2025

## 📊 Results to date

### 1) Multiple Choice Q&A (lm-eval)

**MMLU (overall and bands)**

| Metric       |  Score |
| :----------- | -----: |
| MMLU Overall | 60.45% |
| Humanities   | 51.65% |
| STEM         | 59.44% |
| Social Sci.  | 71.66% |
| Other        | 63.69% |

**Selected Tasks**

| Task                     |  Score |
| :----------------------- | -----: |
| arc_challenge (acc_norm) | 50.77% |
| arc_easy                 | 74.07% |
| boolq                    | 75.02% |
| hellaswag (acc_norm)     | 64.99% |
| mmlu                     | 60.45% |
| openbookqa (acc_norm)    | 38.20% |
| rte                      | 68.23% |
| winogrande               | 64.64% |
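
For reference, a reproduction sketch using the lm-evaluation-harness Python API (lm-eval ≥ 0.4) is below. The task names mirror the tables above; the batch size and default few-shot settings are assumptions, not the exact eval config, so small score drift is expected.

```python
# Reproduction sketch with the lm-evaluation-harness (pip install lm-eval).
# Settings here are assumptions; consult the harness docs for exact configs.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args=(
        "pretrained=VibeStudio/MiniMax-M2-THRIFT-55-v1,"
        "trust_remote_code=True"
    ),
    tasks=["mmlu", "arc_challenge", "arc_easy", "boolq",
           "hellaswag", "openbookqa", "rte", "winogrande"],
    batch_size="auto",
)
for task, metrics in results["results"].items():
    print(task, metrics)
```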

### 2) Code Generation (EvalPlus)

**MBPP (Python, 378 problems)**

| Metric  | Score |
| :------ | ----: |
| MBPP    | 42.1% |
| MBPP+   | 37.3% |
| Average | 39.7% |

**HumanEval (164 problems)**

| Metric     | Score |
| :--------- | ----: |
| HumanEval  | 40.2% |
| HumanEval+ | 39.6% |
| Average    | 39.9% |
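
As a pointer for reproduction, the sketch below collects HumanEval+ completions from an OpenAI-compatible endpoint (such as the SGLang server from the deployment section below) using EvalPlus's data helpers; scoring then happens offline with the `evalplus.evaluate` CLI. The prompt wrapping is a simplifying assumption, not the exact harness configuration.

```python
# Hedged sketch: generate HumanEval+ samples against a local server, then
# score the resulting file offline with `evalplus.evaluate`.
from evalplus.data import get_human_eval_plus, write_jsonl
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

samples = []
for task_id, problem in get_human_eval_plus().items():
    resp = client.chat.completions.create(
        model="VibeStudio/MiniMax-M2-THRIFT-55-v1",
        messages=[{"role": "user", "content":
                   "Complete the following Python function:\n\n"
                   + problem["prompt"]}],
    )
    samples.append({"task_id": task_id,
                    "solution": resp.choices[0].message.content})

write_jsonl("samples.jsonl", samples)  # then score samples.jsonl with EvalPlus
```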

### 3) LiveCodeBench (Live Coding)

| Metric   |      Value |
| :------- | ---------: |
| pass@1   |     16.48% |
| Problems |        182 |
| Status   | ✅ Complete |

### 4) Coming next

* **GSM8K** and **MATH-500** (math suite)
* **WildBench** and **SWE-Bench** (knowledge & software tasks)

---

## SGLang Deployment (Python)

> Use a fresh virtual environment (e.g., `venv`, `conda`, or `uv`).

```bash
git clone -b v0.5.4.post1 https://github.com/sgl-project/sglang.git
cd sglang

pip install --upgrade pip
pip install -e "python"
```

**4-GPU launch**

```bash
python -m sglang.launch_server \
  --model-path VibeStudio/MiniMax-M2-THRIFT-55-v1 \
  --tp-size 4 \
  --tool-call-parser minimax-m2 \
  --reasoning-parser minimax-append-think \
  --host 0.0.0.0 \
  --trust-remote-code \
  --port 8000 \
  --mem-fraction-static 0.85
```

**8-GPU launch**

```bash
python -m sglang.launch_server \
  --model-path VibeStudio/MiniMax-M2-THRIFT-55-v1 \
  --tp-size 8 \
  --ep-size 8 \
  --tool-call-parser minimax-m2 \
  --reasoning-parser minimax-append-think \
  --host 0.0.0.0 \
  --trust-remote-code \
  --port 8000 \
  --mem-fraction-static 0.85
```

### Quick Test (OpenAI-compatible)

```bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "VibeStudio/MiniMax-M2-THRIFT-55-v1",
    "messages": [
      {"role":"system","content":[{"type":"text","text":"You are a helpful assistant."}]},
      {"role":"user","content":[{"type":"text","text":"Write a Python function to reverse a linked list."}]}
    ]
  }'
```

> Swap `localhost:8000` with `151.185.44.34:8000` to hit the test box.
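
The same request works through the OpenAI Python SDK (`pip install openai`); the `api_key` value below is a placeholder, since the local server does not check it.

```python
# Equivalent quick test via the OpenAI Python SDK against the local
# SGLang endpoint; api_key is a dummy value for the local server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
resp = client.chat.completions.create(
    model="VibeStudio/MiniMax-M2-THRIFT-55-v1",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a Python function to reverse a linked list."},
    ],
)
print(resp.choices[0].message.content)
```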

---

## Benchmarks

This README reflects **MMLU**, **MBPP**, **HumanEval**, and **LiveCodeBench** results completed by **Nov 25, 2025**. Additional benchmarks will appear here as they finish.

## Research paper

Coming soon.

---

## License

Derived from MiniMax-M2 and distributed under the **MIT License**:
[https://github.com/MiniMax-AI/MiniMax-M2/blob/main/LICENSE](https://github.com/MiniMax-AI/MiniMax-M2/blob/main/LICENSE)

---

## Credits

Model conversion and Transformers glue by **@Qubitum** at **ModelCloud**.

## References (BibTeX)

```bibtex
@article{cai2025thinking,
  title = {Thinking with DistilQwen: A Tale of Four Distilled Reasoning and Reward Model Series},
  author = {Cai, Wenrui and Wang, Chengyu and Yan, Junbing and Huang, Jun and Fang, Xiangzhong},
  journal = {arXiv preprint arXiv:2511.01354},
  year = {2025},
  eprinttype = {arXiv},
  eprint = {2511.01354},
  primaryclass = {cs.CL}
}

@article{lasby2025reap,
  title = {{REAP the Experts: Why Pruning Prevails for One-Shot MoE Compression}},
  author = {Lasby, Mike and Lazarevich, Ivan and Sinnadurai, Nish and Lie, Sean and Ioannou, Yani and Thangarasa, Vithursan},
  journal = {arXiv preprint arXiv:2510.13999},
  year = {2025},
  eprinttype = {arXiv},
  eprint = {2510.13999},
  url = {https://arxiv.org/abs/2510.13999}
}

@article{yang2025wandapp,
  title = {Wanda++: Pruning Large Language Models via Regional Gradients},
  author = {Yang, Yifan and Zhen, Kai and Ganesh, Bhavana and Galstyan, Aram and Huybrechts, Goeric and Müller, Markus and Kübler, Jonas M. and Swaminathan, Rupak Vignesh and Mouchtaris, Athanasios and Bodapati, Sravan Babu and Susanj, Nathan and Zhang, Zheng and FitzGerald, Jack and Kumar, Abhishek},
  journal = {arXiv preprint arXiv:2503.04992},
  year = {2025},
  eprinttype = {arXiv},
  eprint = {2503.04992},
  primaryclass = {cs.CL}
}

@article{li2025tyr,
  title = {Týr-the-Pruner: Structural Pruning LLMs via Global Sparsity Distribution Optimization},
  author = {Li, G. and Xu, Yixing and Li, Zeping and Liu, Ji and Yin, Xuanwu and Li, Dong and Barsoum, Emad},
  journal = {arXiv preprint arXiv:2503.09657},
  year = {2025},
  eprinttype = {arXiv},
  eprint = {2503.09657},
  primaryclass = {cs.CL}
}

@article{xia2023sheared,
  title = {Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning},
  author = {Xia, Mengzhou and Gao, Tianyu and Zeng, Zhiyuan and Chen, Danqi},
  journal = {arXiv preprint arXiv:2310.06694},
  year = {2023},
  eprinttype = {arXiv},
  eprint = {2310.06694},
  primaryclass = {cs.CL}
}

@article{ma2023llmpruner,
  title = {LLM-Pruner: On the Structural Pruning of Large Language Models},
  author = {Ma, Xinyin and Fang, Gongfan and Wang, Xinchao},
  journal = {arXiv preprint arXiv:2305.11627},
  year = {2023},
  eprinttype = {arXiv},
  eprint = {2305.11627},
  primaryclass = {cs.CL}
}

@article{sun2023wanda,
  title = {A Simple and Effective Pruning Approach for Large Language Models},
  author = {Sun, Mingjie and Liu, Zhuang and Bair, Anna and Kolter, J. Zico},
  journal = {arXiv preprint arXiv:2306.11695},
  year = {2023},
  eprinttype = {arXiv},
  eprint = {2306.11695},
  primaryclass = {cs.CL}
}

@article{frantar2023sparsegpt,
  title = {SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot},
  author = {Frantar, Elias and Alistarh, Dan},
  journal = {arXiv preprint arXiv:2301.00774},
  year = {2023},
  eprinttype = {arXiv},
  eprint = {2301.00774},
  primaryclass = {cs.CL}
}

@article{dettmers2023qlora,
  title = {QLoRA: Efficient Finetuning of Quantized LLMs},
  author = {Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke},
  journal = {arXiv preprint arXiv:2305.14314},
  year = {2023},
  eprinttype = {arXiv},
  eprint = {2305.14314},
  primaryclass = {cs.LG}
}
```