haoyang-amd committed
Commit 5d5854a · verified · 1 Parent(s): a70731e

Update README.md

Files changed (1): README.md +1 -1
README.md CHANGED
@@ -30,7 +30,7 @@ The model was quantized from [deepseek-ai/DeepSeek-R1-0528](https://huggingface.
  **Preprocessing requirement:**
 
  Before executing the quantization script below, the original FP8 model must first be dequantized to BFloat16.
- You can either perform the dequantization manually using this [conversion script](https://github.com/deepseek-ai/DeepSeek-V3/blob/main/inference/fp8_cast_bf16.py), or use the pre-converted BFloat16 model available at [unsloth/DeepSeek-R1-0528-BF16](https://huggingface.co/unsloth/DeepSeek-R1-0528-BF16).
+ You can either perform the dequantization manually using this [conversion script](https://github.com/deepseek-ai/DeepSeek-V3/blob/main/inference/fp8_cast_bf16.py), or use the pre-converted BFloat16 model available at [amd/DeepSeek-R1-0528-BF16](https://huggingface.co/amd/DeepSeek-R1-0528-BF16).
 
  **Quantization scripts:**
  ```
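The dequantization step this README refers to can be sketched as follows. This is a minimal illustration, not the conversion script itself: it assumes DeepSeek-style block-wise quantization, where each weight matrix carries a grid of per-block inverse scales that are multiplied back in. The block size of 128 is an assumption, and `float32` stands in for the FP8/BFloat16 dtypes that NumPy lacks.

```python
import numpy as np

BLOCK = 128  # assumed per-block scale granularity (128x128 blocks)

def dequantize_blockwise(weight, scale_inv, block=BLOCK):
    """Sketch of block-wise dequantization.

    weight:    2-D quantized weight matrix (float32 stand-in for FP8)
    scale_inv: 2-D grid of inverse scales, one per (block x block) tile
    Returns the dequantized matrix (float32 stand-in for BFloat16).
    """
    rows, cols = weight.shape
    # Expand each scale over its tile, then trim to the weight's shape
    # (the last row/column of tiles may be partial).
    expanded = np.repeat(np.repeat(scale_inv, block, axis=0), block, axis=1)
    return weight * expanded[:rows, :cols]

# Toy example: a 256x256 weight covered by a 2x2 grid of scales.
w = np.ones((256, 256), dtype=np.float32)
s = np.full((2, 2), 0.5, dtype=np.float32)
dq = dequantize_blockwise(w, s)
print(dq[0, 0], dq.shape)  # 0.5 (256, 256)
```

In the real pipeline the loop runs over every quantized tensor in the checkpoint and the result is cast to BFloat16 before being saved; the hypothetical names above are only for illustration.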