Update README.md
**Preprocessing requirement:**

Before executing the quantization script below, the original FP8 model must first be dequantized to BFloat16. You can either perform the dequantization manually using this [conversion script](https://github.com/deepseek-ai/DeepSeek-V3/blob/main/inference/fp8_cast_bf16.py), or use the pre-converted BFloat16 model available at [amd/DeepSeek-R1-0528-BF16](https://huggingface.co/amd/DeepSeek-R1-0528-BF16).
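For intuition, the FP8 checkpoint stores block-quantized weight tensors together with per-block scales, and dequantization multiplies each block by its scale before casting to BFloat16. The sketch below illustrates that idea only; it is not the linked conversion script. The 128×128 block size matches DeepSeek-V3's scaling granularity, but the function name and the use of plain floats in place of real FP8 values are illustrative assumptions.

```python
import numpy as np

BLOCK = 128  # DeepSeek-V3 quantizes weights in 128x128 scaling blocks


def dequantize(weight, scale_inv, block=BLOCK):
    """Illustrative block dequantization: expand each per-block scale
    over its tile and multiply it back into the quantized weight.

    weight:    (M, N) quantized values (FP8 in the real checkpoint;
               plain float32 here for illustration)
    scale_inv: (ceil(M/block), ceil(N/block)) per-block scales
    """
    m, n = weight.shape
    # Repeat each block scale across its tile, then crop to the weight shape.
    scales = np.repeat(np.repeat(scale_inv, block, axis=0), block, axis=1)[:m, :n]
    return (weight * scales).astype(np.float32)


# Toy example: 2x2 blocks instead of 128x128 to keep the arrays small
w = np.ones((4, 4), dtype=np.float32)
s = np.array([[2.0, 3.0], [4.0, 5.0]], dtype=np.float32)
out = dequantize(w, s, block=2)  # each 2x2 tile scaled by its own factor
```

In the real checkpoint the same pairing appears as a `weight` tensor and its companion scale tensor per layer; the linked script walks all layers and writes the result out as a BFloat16 Hugging Face checkpoint.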
**Quantization scripts:**

```