kewang2 committed
Commit 1d97c92 · verified · 1 Parent(s): 60834d9

Update README.md

Files changed (1):
1. README.md (+0 −2)
README.md CHANGED
@@ -28,8 +28,6 @@ This model is adapted from `unsloth/DeepSeek-R1-0528-BF16` by adding an MTP laye
 - After loading this model with `transformers`, **evaluation should NOT be performed directly**. The reason is that the forward function for the added MTP layer in `modeling_deepseek.py` is implemented only for calibration during the quantization process, so computation is not guaranteed to be the same as the original DeepSeek-R1-0528.
 - Therefore, when quantizing with AMD-Quark, you **must add the `--skip_evaluation` option** to skip the evaluation step and only perform quantization.
 
-For further details or issues, please refer to the [AMD-Quark](https://quark.docs.amd.com/latest/index.html) documentation or contact the respective developers.
-
 Below is an example of how to quantize this model:
 
 ```bash
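# NOTE: the README's actual command is cut off at the edge of this diff hunk.
# The sketch below is a hypothetical invocation modeled on AMD-Quark's LLM
# post-training-quantization examples; the script name, paths, quantization
# scheme, and every flag except --skip_evaluation (which the notes above say
# is required for this model) are assumptions — consult the AMD-Quark
# documentation for the exact arguments.
python quantize_quark.py \
    --model_dir ./DeepSeek-R1-0528-MTP \
    --quant_scheme w_fp8_a_fp8 \
    --output_dir ./quantized_model \
    --skip_evaluation
```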