shieldstackllc committed on
Commit 7f3c9f9 · verified · 1 Parent(s): fa88915

Add vMLX model card

Files changed (1): README.md (+63 −0)

README.md ADDED
---
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- mlx
- mixture-of-experts
- moe
- pruning
- reap
- step3p5
- mixed-quantization
- apple-silicon
library_name: mlx
base_model: lkevincc0/Step-3.5-Flash-REAP-128B-A11B
---

<p align="center">
  <a href="https://vmlx.net">
    <img src="vmlx-logo.png" alt="vMLX" width="120">
  </a>
</p>

# Step-3.5-Flash REAP 128B-A11B — MLX Mixed 4/6-bit

MLX mixed-precision quantized version of [lkevincc0/Step-3.5-Flash-REAP-128B-A11B](https://huggingface.co/lkevincc0/Step-3.5-Flash-REAP-128B-A11B) for efficient local inference on Apple Silicon.

- **Quantization**: Mixed 4/6-bit — v_proj and down_proj at 6-bit, all other weights at 4-bit (group size 64, affine mode)
- **Architecture**: Step-3.5 SMoE — 45 layers, 173 routed experts (REAP-pruned), 8 active per token, shared expert
- **Parameters**: 128B total, 11B active per token
- **Context**: 262K tokens
- **Size**: ~68 GB
- **Pruning**: ~40% of experts removed via [REAP](https://github.com/CerebrasResearch/reap) (Router Expert Activation Pruning)

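The mixed 4/6-bit policy above can be captured as a small per-layer helper, and the group-size-64 affine format also lets us sanity-check the ~68 GB figure. The function names and the effective-bits arithmetic below are an illustrative sketch, not the actual conversion script used for this repo:

```python
# Sketch of the mixed 4/6-bit policy described above (illustrative only).
GROUP_SIZE = 64  # affine quantization group size from the model card


def mixed_bits(layer_path: str) -> dict:
    """Per-layer quantization settings: 6-bit for v_proj/down_proj,
    4-bit for everything else."""
    bits = 6 if ("v_proj" in layer_path or "down_proj" in layer_path) else 4
    return {"group_size": GROUP_SIZE, "bits": bits}


def effective_bits(bits: int, group_size: int = GROUP_SIZE) -> float:
    """Bits per weight including one fp16 scale and one fp16 bias per group."""
    return bits + 32 / group_size


# Rough size check: 128B weights at an all-4-bit lower bound.
total_params = 128e9
approx_bytes = total_params * effective_bits(4) / 8
print(mixed_bits("model.layers.0.self_attn.v_proj"))  # 6-bit layer
print(mixed_bits("model.layers.0.self_attn.q_proj"))  # 4-bit layer
print(f"~{approx_bytes / 2**30:.0f} GiB")             # ≈ 67 GiB before 6-bit layers
```

The all-4-bit lower bound already lands near 67 GiB; keeping v_proj and down_proj at 6-bit accounts for the remaining gap to the listed ~68 GB.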
## Usage

```python
from mlx_lm import load, generate

model, tokenizer = load("shieldstackllc/Step-3.5-Flash-REAP-128B-A11B-mlx-mixed-4-6")
response = generate(model, tokenizer, prompt="Hello!", verbose=True)
```

Or with [vMLX](https://vmlx.net) for native macOS inference.

## About

Step-3.5-Flash is a large Mixture-of-Experts language model by [StepFun AI](https://stepfun.com). This variant was pruned by [lkevincc0](https://huggingface.co/lkevincc0) using REAP (Router Expert Activation Pruning), reducing the routed expert count by roughly 40% to 173 while maintaining strong performance. The mixed-precision MLX quantization preserves higher fidelity on critical attention and feed-forward projections by keeping v_proj and down_proj at 6-bit.

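For intuition, REAP-style pruning scores each routed expert on calibration data by combining how strongly the router gates it with how large its contribution is, then drops the lowest-scoring experts. The toy sketch below is a simplified assumption of that flow (the function name and exact scoring formula are illustrative, not the REAP implementation — see the REAP repo for the real criterion):

```python
# Toy illustration of saliency-based expert pruning (simplified).
import random

random.seed(0)


def prune_experts(gate_weights, output_norms, keep_ratio):
    """Score experts as gate weight x output norm, keep the top fraction.

    Returns the sorted indices of the surviving experts."""
    scores = [g * n for g, n in zip(gate_weights, output_norms)]
    n_keep = max(1, round(len(scores) * keep_ratio))
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return sorted(ranked[:n_keep])


# Keeping ~60% of experts (i.e. ~40% removed) mirrors this model; demo with 10.
gates = [random.random() for _ in range(10)]
norms = [random.random() for _ in range(10)]
kept = prune_experts(gates, norms, keep_ratio=0.6)
print(f"kept {len(kept)} of 10 experts: {kept}")
```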
## Made for vMLX

This model was converted and optimized for [vMLX](https://vmlx.net) — a free, open-source, macOS-native MLX inference engine for Apple Silicon. Download vMLX to run this model locally with zero configuration.

## Credits

- **Base model**: [stepfun-ai/Step-3.5-Flash](https://huggingface.co/stepfun-ai/Step-3.5-Flash) by StepFun AI
- **REAP pruning**: [lkevincc0/Step-3.5-Flash-REAP-128B-A11B](https://huggingface.co/lkevincc0/Step-3.5-Flash-REAP-128B-A11B) by lkevincc0
- **MLX conversion**: [vMLX](https://vmlx.net) — Run AI locally on Mac. No compromises.

## Contact

For questions, issues, or collaboration: **admin@vmlx.net**