# Note: If you have a multi-GPU SM120 Blackwell system (RTX 50/Pro), try my vLLM fork to resolve P2P / TP=2 issues (PR into upstream pending):
https://github.com/Gadflyii/vllm/tree/main

# GLM-4.7-Flash-MTP-NVFP4 (Mixed Precision with MTP in BF16)

This is a **mixed precision NVFP4 quantization** of [zai-org/GLM-4.7-Flash](https://huggingface.co/zai-org/GLM-4.7-Flash), a 30B-A3B (30B total, 3B active) Mixture-of-Experts model. This version preserves **MTP (Multi-Token Prediction) layers in BF16** for speculative decoding compatibility.
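
For context, here is a minimal vLLM loading sketch with the MTP drafter enabled. This is an illustration, not an official quickstart: the repo id is a placeholder for this checkpoint, dict-style `speculative_config` requires a recent vLLM, and the `"method": "mtp"` value is an assumption — check your vLLM version's speculative decoding docs for the exact method name for GLM drafters.

```python
from vllm import LLM, SamplingParams

# Placeholder repo id for this checkpoint; substitute the actual model path.
# NVFP4 weights are auto-detected from the checkpoint's quantization config.
llm = LLM(
    model="Gadflyii/GLM-4.7-Flash-MTP-NVFP4",
    tensor_parallel_size=2,  # TP=2 on SM120 may need the fork linked above
    # The BF16 MTP layers make speculative decoding possible; "mtp" is an
    # assumed method name -- verify against your vLLM version's docs.
    speculative_config={"method": "mtp", "num_speculative_tokens": 1},
)

params = SamplingParams(temperature=0.6, max_tokens=256)
out = llm.generate(["Explain multi-token prediction in two sentences."], params)
print(out[0].outputs[0].text)
```

Per the throughput numbers further down, leaving `speculative_config` unset is currently the faster configuration.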

| Feature | GLM-4.7-Flash-NVFP4 | **This Model** |
|---------|---------------------|----------------|
| MTP Layers | NVFP4 | BF16 |
| Calibration Samples | 128 | **512** |
| Calibration Seq Length | 2048 | **4096** |
| MMLU-Pro Accuracy | 23.56% | **23.91%** |
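
For readers curious how the calibration settings above map to a quantization run, here is a hedged llm-compressor sketch. It is a guess at the shape of such a script, not the author's actual recipe: the `NVFP4` scheme and `oneshot` API exist in recent llm-compressor releases, but the ignore patterns (the MTP module names in particular) and the calibration dataset are assumptions.

```python
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

# NVFP4 for all Linear layers, keeping lm_head and the MTP drafter in BF16.
# "re:.*mtp.*" is a guessed pattern -- inspect the checkpoint's module tree
# for the real drafter prefixes before running.
recipe = QuantizationModifier(
    targets="Linear",
    scheme="NVFP4",
    ignore=["lm_head", "re:.*mtp.*"],
)

oneshot(
    model="zai-org/GLM-4.7-Flash",
    recipe=recipe,
    dataset="open_platypus",       # stand-in calibration set
    num_calibration_samples=512,   # matches the table above
    max_seq_length=4096,           # matches the table above
    output_dir="GLM-4.7-Flash-MTP-NVFP4",
)
```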
## Performance

| Metric | BF16 | NVFP4 | **This Model** |
|--------|------|-------|----------------|
| MMLU-Pro | 24.83% | 23.56% | **23.91%** |
| Size | 62.4 GB | 20.4 GB | **20.9 GB** |
| Compression | 1x | 3.1x | **3.0x** |
| Accuracy Loss | - | -1.27% | **-0.92%** |
### MTP Acceptance Rate

MTP quality is preserved (actually slightly improved) after quantization.

MTP speculative decoding currently shows overhead rather than speedup due to missing `torch.compile` support for the MTP drafter model in vLLM. For best throughput, run without MTP enabled until this is resolved upstream.

| Configuration | Tokens/sec |
|---------------|------------|
| Without MTP | 78.1 tok/s |
| With MTP (1 token) | 64.7 tok/s |
| With MTP (2 tokens) | 56.8 tok/s |
| With MTP (4 tokens) | 44.5 tok/s |
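
The numbers above can be approximated with a crude single-request probe; a minimal sketch follows (placeholder repo id — a real benchmark should use batched requests and warmup):

```python
import time
from vllm import LLM, SamplingParams

def decode_tok_per_s(llm: LLM, n_tokens: int = 512) -> float:
    """One fixed-length request; crude, but enough to compare configs."""
    params = SamplingParams(temperature=0.0, max_tokens=n_tokens, ignore_eos=True)
    start = time.perf_counter()
    out = llm.generate(["Write a long essay about GPUs."], params)[0]
    return len(out.outputs[0].token_ids) / (time.perf_counter() - start)

# Baseline (no speculative decoding) -- the fastest configuration today.
llm = LLM(model="Gadflyii/GLM-4.7-Flash-MTP-NVFP4")  # placeholder repo id
print(f"without MTP: {decode_tok_per_s(llm):.1f} tok/s")
# Re-run with speculative_config={"method": "mtp", ...} to fill in the MTP
# rows; instantiate in a fresh process so GPU memory is released.
```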
## Usage