Xiao-AMD committed
Commit 1a3ee7f · verified · 1 Parent(s): e2816d2

Update README.md

Files changed (1): README.md (+1 −1)
README.md CHANGED
@@ -27,7 +27,7 @@ license_link: https://github.com/MiniMax-AI/MiniMax-M2.5/blob/main/LICENSE
 
 # Model Quantization
 
-The model was quantized from [MiniMaxAI/MiniMax-M2.5](https://huggingface.co/MiniMaxAI/MiniMax-M2.5) through a middle step using [QuixiAI/MiniMax-M2.1-bf16/minimax_to_bf16.py](https://huggingface.co/QuixiAI/MiniMax-M2.1-bf16/blob/main/minimax_to_bf16.py) using [AMD-Quark](https://quark.docs.amd.com/latest/index.html). The weights are quantized to MXFP4 and activations are quantized to MXFP4.
+The model was quantized from [MiniMaxAI/MiniMax-M2.5](https://huggingface.co/MiniMaxAI/MiniMax-M2.5), first converted to bf16 with [QuixiAI/MiniMax-M2.1-bf16/minimax_to_bf16.py](https://huggingface.co/QuixiAI/MiniMax-M2.1-bf16/blob/main/minimax_to_bf16.py), using [AMD-Quark](https://quark.docs.amd.com/latest/index.html). Both weights and activations are quantized to MXFP4.
 
 **Quantization scripts:**
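For readers unfamiliar with MXFP4, the following is a minimal numpy sketch of how one block of an MX-format tensor is quantized per the OCP Microscaling spec: 32 elements share a single power-of-two (E8M0) scale, and each element is rounded to a 4-bit E2M1 value. This illustrates the number format only; it is not AMD-Quark's actual implementation, and the function name `quantize_mxfp4_block` is hypothetical.

```python
import numpy as np

# Representable non-negative magnitudes of FP4 E2M1 (1 sign, 2 exponent, 1 mantissa bit)
FP4_E2M1 = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_mxfp4_block(block):
    """Quantize-dequantize one 32-element block to MXFP4 (illustrative sketch)."""
    amax = np.max(np.abs(block))
    if amax == 0:
        return np.zeros_like(block)
    # Shared E8M0 scale: align the block's largest magnitude with E2M1's
    # largest exponent (emax = 2), per the OCP MX spec's simple scale choice.
    shared_exp = int(np.floor(np.log2(amax))) - 2
    scale = 2.0 ** shared_exp
    scaled = block / scale
    # Round each scaled element to the nearest representable E2M1 magnitude.
    signs = np.sign(scaled)
    mags = np.abs(scaled)
    idx = np.argmin(np.abs(mags[:, None] - FP4_E2M1[None, :]), axis=1)
    return signs * FP4_E2M1[idx] * scale
```

Values that already land on E2M1 grid points survive the round trip exactly (e.g. a block of all 1.0s comes back unchanged); everything else is rounded to the nearest of 16 values within its block's scale, which is what makes per-block scaling much gentler than a single per-tensor FP4 scale.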