kewang2 committed
Commit db6920b · verified · 1 Parent(s): 1d97c92

Update README.md

Files changed (1): README.md (+1, −1)
README.md CHANGED
@@ -26,7 +26,7 @@ This model is adapted from `unsloth/DeepSeek-R1-0528-BF16` by adding an MTP layer
  **Important Notes:**
  - When loading this model, you must set `trust_remote_code=True` to ensure that changes related to the MTP layer in `modeling_deepseek.py` take effect.
  - After loading this model with `transformers`, **evaluation should NOT be performed directly**. The reason is that the forward function for the added MTP layer in `modeling_deepseek.py` is implemented only for calibration during the quantization process, so computation is not guaranteed to be the same as the original DeepSeek-R1-0528.
- - Therefore, when quantizing with AMD-Quark, you **must add the `--skip_evaluation` option** to skip the evaluation step and only perform quantization.
+ - Therefore, when quantizing with [AMD-Quark](https://quark.docs.amd.com/latest/index.html), you **must add the `--skip_evaluation` option** to skip the evaluation step and only perform quantization.

  Below is an example of how to quantize this model:
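
For reference, here is a minimal loading sketch in Python reflecting the notes above. The repository path is a placeholder (the commit does not name the repo id); the only requirements the README itself states are `trust_remote_code=True` and not running evaluation directly.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder path for this checkpoint; substitute the actual repo id or
# local directory (not specified in this commit).
model_id = "path/to/DeepSeek-R1-0528-MTP"

# trust_remote_code=True is required so the MTP-layer changes in
# modeling_deepseek.py take effect when the model is loaded.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype="auto",  # keep the checkpoint's BF16 weights
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# Do NOT evaluate this model directly: the forward pass of the added MTP
# layer is implemented only for calibration during quantization, so outputs
# are not guaranteed to match the original DeepSeek-R1-0528. Use the loaded
# model only as input to AMD-Quark quantization, run with --skip_evaluation.
```

The actual quantization command is the README's own example (outside this hunk); per the notes above, whatever AMD-Quark invocation is used must include the `--skip_evaluation` option.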