---
license: mit
base_model:
- deepseek-ai/DeepSeek-R1-0528
---

**Note that the MTP layers of this model are also PTPC-quantized.**

# Model Overview

- **Model Architecture:** DeepSeek-R1-0528
- **Input:** Text
- **Output:** Text
- **Supported Hardware Microarchitecture:** AMD MI350/MI355
- **ROCm:** 7.0
- **Operating System(s):** Linux
- **Inference Engine:** [SGLang](https://docs.sglang.ai/)/[vLLM](https://docs.vllm.ai/en/latest/)
- **Model Optimizer:** [AMD-Quark](https://quark.docs.amd.com/latest/index.html) (V0.10)
- **Weight quantization:** Per-channel, FP8E4M3, Static
- **Activation quantization:** Per-token, FP8E4M3, Dynamic
- **Calibration Dataset:** [Pile](https://huggingface.co/datasets/mit-han-lab/pile-val-backup)

This model was built from the deepseek-ai DeepSeek-R1-0528 model by applying [AMD-Quark](https://quark.docs.amd.com/latest/index.html) for FP8E4M3 PTPC (per-token activation, per-channel weight) quantization.

# Model Quantization

The model was quantized from [deepseek-ai/DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) using [AMD-Quark](https://quark.docs.amd.com/latest/index.html). Weights are quantized per-channel to FP8E4M3 with static scales, and activations are quantized per-token to FP8E4M3 with dynamic scales.
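
In other words, each weight matrix carries one static FP8 scale per output channel, while each activation matrix gets one scale per token (row), computed dynamically at inference time. The toy PyTorch sketch below illustrates only this scaling arithmetic; it is not the AMD-Quark implementation, and the tensor shapes and `FP8_MAX` constant are illustrative assumptions.

```
import torch

FP8_MAX = 448.0  # largest finite magnitude representable in FP8 E4M3

def quantize_weight_per_channel(w: torch.Tensor):
    # One static scale per output channel (row of the weight matrix)
    scale = w.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / FP8_MAX
    return (w / scale).to(torch.float8_e4m3fn), scale

def quantize_activation_per_token(x: torch.Tensor):
    # One dynamic scale per token (row of the activation matrix), computed at run time
    scale = x.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / FP8_MAX
    return (x / scale).to(torch.float8_e4m3fn), scale

w = torch.randn(16, 32)  # [out_channels, in_channels]
x = torch.randn(4, 32)   # [tokens, in_channels]
wq, w_scale = quantize_weight_per_channel(w)
xq, x_scale = quantize_activation_per_token(x)

# Dequantize and compare against the full-precision matmul
y_ref = x @ w.T
y_fp8 = (xq.float() * x_scale) @ (wq.float() * w_scale).T
print("max abs error:", (y_fp8 - y_ref).abs().max().item())
```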

**Preprocessing requirement:**

Before executing the quantization script below, the original FP8 model must first be dequantized to BFloat16.
You can either perform the dequantization manually using this [conversion script](https://github.com/deepseek-ai/DeepSeek-V3/blob/main/inference/fp8_cast_bf16.py), or use the pre-converted BFloat16 model available at [unsloth/DeepSeek-R1-0528-BF16](https://huggingface.co/unsloth/DeepSeek-R1-0528-BF16).
You also need to manually modify the transformers library so that it can load the MTP layer. Alternatively, you can use our modified model [amd/DeepSeek-R1-0528-BF16](https://huggingface.co/amd/DeepSeek-R1-0528-BF16) directly as the starting point for quantization.

**Quantization script:**

```
# pip install amd-quark

from transformers import AutoTokenizer, AutoModelForCausalLM
from quark.torch import ModelQuantizer, export_safetensors
from quark.torch.quantization import FP8E4M3PerChannelSpec
from quark.torch.quantization.config.config import Config, QuantizationConfig

ckpt_path = "amd/DeepSeek-R1-0528-BF16"
# Keep lm_head, the MoE routing gates, and the MTP head projections (layer 61) unquantized
exclude_layers = ["lm_head", "*mlp.gate", "model.layers.61.eh_proj", "model.layers.61.shared_head.head"]
output_dir = ckpt_path.rstrip("/").split("/")[-1] + "-ptpc"

# Load the original floating-point model
model = AutoModelForCausalLM.from_pretrained(ckpt_path, device_map="auto", torch_dtype="auto", trust_remote_code=True)
model.eval()
tokenizer = AutoTokenizer.from_pretrained(ckpt_path)

# Set the quantization configuration
FP8_PER_CHANNEL_SPEC = FP8E4M3PerChannelSpec(is_dynamic=False, ch_axis=0).to_quantization_spec()  # static per-channel spec (weights)
FP8_PER_TOKEN_DYNAMIC_SPEC = FP8E4M3PerChannelSpec(is_dynamic=True, ch_axis=1).to_quantization_spec()  # dynamic per-token spec (activations)
W_FP8_PER_CHANNEL_STATIC_A_FP8_PER_TOKEN_DYNAMIC_CONFIG = QuantizationConfig(input_tensors=FP8_PER_TOKEN_DYNAMIC_SPEC, weight=FP8_PER_CHANNEL_SPEC)
quant_config = Config(global_quant_config=W_FP8_PER_CHANNEL_STATIC_A_FP8_PER_TOKEN_DYNAMIC_CONFIG, exclude=exclude_layers)

# Apply quantization
quantizer = ModelQuantizer(quant_config)
model = quantizer.quantize_model(model)

# Export quantized model
model = quantizer.freeze(model)
export_safetensors(model, output_dir)
tokenizer.save_pretrained(output_dir)
```
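
After the script finishes, `output_dir` should contain the quantized safetensors shards plus the model and tokenizer configuration. A quick, minimal sanity check might look like the following (it assumes the exporter writes a Hugging Face-style directory with a `config.json`; exact file names can vary between Quark versions):

```
import json
import os

out = "DeepSeek-R1-0528-BF16-ptpc"  # output_dir from the quantization script above

# List a few of the exported files (safetensors shards, config, tokenizer files)
print(sorted(os.listdir(out))[:10])

# Show the quantization section of the exported config, if one was written
with open(os.path.join(out, "config.json")) as f:
    cfg = json.load(f)
print(cfg.get("quantization_config", "no quantization_config entry found"))
```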

# Deployment

This model can be deployed efficiently using the [SGLang](https://docs.sglang.ai/) or [vLLM](https://docs.vllm.ai/en/latest/) backends.
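
Below is a minimal offline-inference sketch using vLLM's Python API. It assumes a ROCm build of vLLM with FP8 support on the hardware listed above; the checkpoint path (the `output_dir` produced by the quantization script), the tensor-parallel size, and the sampling settings are placeholders to adapt to your setup.

```
from vllm import LLM, SamplingParams

# Path to the exported PTPC-quantized checkpoint (or its Hugging Face model ID)
llm = LLM(
    model="DeepSeek-R1-0528-BF16-ptpc",
    tensor_parallel_size=8,   # DeepSeek-R1 requires multiple GPUs; adjust to your node
    trust_remote_code=True,
)

sampling_params = SamplingParams(temperature=0.6, max_tokens=256)
outputs = llm.generate(["Explain FP8 PTPC quantization in one paragraph."], sampling_params)
print(outputs[0].outputs[0].text)
```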

# License

Modifications Copyright (c) 2025 Advanced Micro Devices, Inc. All rights reserved.