# Model Quantization
The model was quantized from [MiniMaxAI/MiniMax-M2.5](https://huggingface.co/MiniMaxAI/MiniMax-M2.5), which was first converted to bf16 with [QuixiAI/MiniMax-M2.1-bf16/minimax_to_bf16.py](https://huggingface.co/QuixiAI/MiniMax-M2.1-bf16/blob/main/minimax_to_bf16.py) and then quantized with [AMD-Quark](https://quark.docs.amd.com/latest/index.html). Both weights and activations are quantized to MXFP4.
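The bf16 conversion step can be illustrated with a minimal sketch (a toy state dict, not the actual `minimax_to_bf16.py` logic): cast every floating-point tensor to bfloat16 while leaving non-float tensors untouched.

```python
import torch

# Hypothetical illustration of the bf16 conversion step (toy state dict,
# not the actual minimax_to_bf16.py implementation): cast floating-point
# tensors to bfloat16, leave non-float tensors (e.g. index buffers) as-is.
state_dict = {
    "weight": torch.randn(4, 4, dtype=torch.float32),
    "ids": torch.arange(4),  # integer tensor, stays int64
}
bf16_state = {
    k: v.to(torch.bfloat16) if v.is_floating_point() else v
    for k, v in state_dict.items()
}
```

The real script applies the same cast shard by shard over the full checkpoint.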
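MXFP4 refers to the OCP Microscaling format: blocks of 32 values share one power-of-two scale, and each element is a 4-bit E2M1 float. The following fake-quantization sketch is illustrative only — it is not AMD-Quark's implementation, and the scale rule is an assumption based on the MX specification:

```python
import torch

# Representable E2M1 magnitudes (sign handled separately).
E2M1_GRID = torch.tensor([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def mxfp4_fake_quant(block: torch.Tensor) -> torch.Tensor:
    """Fake-quantize a 32-element block to MXFP4 (illustrative sketch)."""
    amax = block.abs().max()
    if amax == 0:
        return block.clone()
    # Shared power-of-two scale; E2M1's largest magnitude is 6 = 1.5 * 2**2.
    scale = 2.0 ** (torch.floor(torch.log2(amax)) - 2)
    scaled = block / scale
    # Round each magnitude to the nearest representable E2M1 value.
    idx = (scaled.abs().unsqueeze(-1) - E2M1_GRID).abs().argmin(dim=-1)
    return torch.sign(scaled) * E2M1_GRID[idx] * scale

torch.manual_seed(0)
x = torch.randn(32)
xq = mxfp4_fake_quant(x)
```

Since both weights and activations use this format here, matmuls can run in low precision end to end, with only the per-block scales kept in higher precision.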
**Quantization scripts:**