---
base_model:
- MiniMaxAI/MiniMax-M2.7
---
## Notes

- 04-12-2026: The Q4_K_M I uploaded seems to have some issues; the PPL / KLD evaluation was throwing `nan`, so I'll remove the model for now and try to get a working quant up tomorrow.
## Description

This repo contains specialized MoE quants of MiniMax-M2.7. The idea is that, given the huge size of the FFN tensors relative to the rest of the model, it should be possible to achieve better quality at a smaller overall model size than a comparable naive quantization. To that end, the default quantization type is kept at high quality, while the FFN UP and FFN GATE tensors are quantized down, along with the FFN DOWN tensors.
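As a rough back-of-the-envelope check on why this pays off, suppose most of the parameters sit in the FFN expert tensors (the 90% share below is an assumed figure for illustration, not measured from this model). Blending per-tensor bits-per-weight by parameter share:

```python
# Effective bits-per-weight (BPW) of a mixture, weighted by parameter share.
# Assumption for illustration: ~90% of the weights are in the FFN experts.
def effective_bpw(parts):
    """parts: iterable of (parameter_share, bits_per_weight) tuples."""
    return sum(share * bpw for share, bpw in parts)

everything_q8 = effective_bpw([(1.0, 8.5)])       # plain Q8_0 everywhere
mix = effective_bpw([(0.9, 4.25), (0.1, 8.5)])    # FFN at IQ4_XS, rest at Q8_0
print(everything_q8, mix)  # roughly 8.5 vs 4.7 BPW
```

Even with everything outside the FFN kept at full Q8_0 quality, the blended size lands far closer to the low-bit quant than to Q8_0, which is the effect the mixtures below exploit.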
| Quant | Size | Mixture (default / ffn_up / ffn_gate / ffn_down) | PPL | PPL(Q)/PPL(base) - 1 | KLD |
| :--------- | :--------- | :------- | :------- | :------- | :------- |
| Q8_0 | 226.43 GiB (8.51 BPW) | Q8_0 | 7.880138 ± 0.060034 | +0.2412% | 0.029715 ± 0.000649 |
| Q5_K_M | 157.23 GiB (5.91 BPW) | Q8_0 / Q5_K / Q5_K / Q6_K | 7.871878 ± 0.059897 | +0.1361% | 0.038926 ± 0.000692 |
| IQ4_XS | 101.10 GiB (3.80 BPW) | Q8_0 / IQ3_S / IQ3_S / IQ4_XS | 8.290674 ± 0.063543 | +5.4635% | 0.128807 ± 0.001070 |
| IQ3_S | 77.86 GiB (2.92 BPW) | Q8_0 / IQ2_S / IQ2_S / IQ3_S | 8.815764 ± 0.067859 | +12.1430% | 0.282740 ± 0.001687 |
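The relative-PPL column is the fractional perplexity increase of the quant over the unquantized baseline, expressed as a percentage. A minimal sketch of the computation (the values in the usage line are made up for illustration, not the table's actual baseline):

```python
def rel_ppl_pct(ppl_quant: float, ppl_base: float) -> float:
    """Relative PPL increase over the baseline, in percent."""
    return (ppl_quant / ppl_base - 1.0) * 100.0

# Hypothetical numbers for illustration only:
print(rel_ppl_pct(8.08, 8.00))  # ~1.0, i.e. a 1% PPL increase
```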
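Conceptually, a mixture like these amounts to picking a quant type per tensor name. A minimal Python sketch of such a selection policy (the name substrings and concrete types mirror the IQ4_XS row above; they are illustrative, not the exact rules used to build these files):

```python
# Illustrative per-tensor quant policy, not the exact logic behind these GGUFs.
def pick_quant(tensor_name: str) -> str:
    # The big MoE FFN tensors get pushed down in precision...
    if "ffn_up" in tensor_name or "ffn_gate" in tensor_name:
        return "IQ3_S"
    if "ffn_down" in tensor_name:
        return "IQ4_XS"
    # ...while everything else (attention, norms, embeddings) stays high quality.
    return "Q8_0"

print(pick_quant("blk.0.ffn_up_exps.weight"))  # IQ3_S
print(pick_quant("blk.0.attn_q.weight"))       # Q8_0
```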