## Model Overview
- Model Architecture: DeepSeek-V3.2
- Input: Text
- Output: Text
- Supported Hardware Microarchitecture: AMD MI350/MI355
- ROCm: 7.0
- PyTorch: 2.8.0
- Transformers: 4.53.0
- Operating System(s): Linux
- Inference Engine: SGLang/vLLM
- Model Optimizer: AMD-Quark (V0.10)
- Weight quantization: OCP MXFP4, Static
- Activation quantization: OCP MXFP4, Dynamic
- Calibration Dataset: Pile
This model was built from the deepseek-ai/DeepSeek-V3.2 model by applying AMD-Quark for MXFP4 quantization.
## Model Quantization
The model was quantized from deepseek-ai/DeepSeek-V3.2 using AMD-Quark. Both weights and activations were quantized to MXFP4 format.
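For reference, the OCP MXFP4 format stores each 32-element block as FP4 (E2M1) values plus one shared power-of-two (E8M0) scale. The NumPy sketch below simulates this quantize-dequantize round trip; it is illustrative only, not AMD-Quark's actual implementation, and the function names are made up for this example.

```python
import numpy as np

# FP4 (E2M1) representable magnitudes per the OCP MX spec
FP4_POS = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])
FP4_GRID = np.concatenate([-FP4_POS[:0:-1], FP4_POS])  # symmetric codebook
BLOCK = 32  # one shared scale per 32 elements

def mxfp4_roundtrip(x):
    """Quantize-dequantize a 1-D array (length divisible by 32) as MXFP4:
    per-block power-of-two scale, elements snapped to the nearest FP4 value."""
    blocks = x.reshape(-1, BLOCK)
    amax = np.abs(blocks).max(axis=1, keepdims=True)
    # Smallest power-of-two scale that brings the block max within FP4's
    # +/-6 range (E8M0 scales are powers of two only).
    exp = np.ceil(np.log2(np.maximum(amax, 2.0 ** -126) / 6.0))
    scale = 2.0 ** exp
    # Snap each scaled element to the nearest FP4 codebook entry.
    idx = np.argmin(np.abs((blocks / scale)[..., None] - FP4_GRID), axis=-1)
    return (FP4_GRID[idx] * scale).reshape(x.shape)
```

Because the shared scale is a power of two, values that are exact FP4 multiples of it survive the round trip unchanged; everything else incurs at most half a codebook step of error per element.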
## Deployment
This model can be deployed efficiently using the SGLang and vLLM backends.
## Evaluation

Start a vLLM server on the quantized checkpoint, then run `lm_eval` against the local endpoint:
```bash
export VLLM_USE_V1=1
export SAFETENSORS_FAST_GPU=1
export VLLM_ROCM_USE_AITER=1
export VLLM_ROCM_USE_AITER_MOE=1
export VLLM_ROCM_USE_AITER_FP8BMM=0
export VLLM_ROCM_USE_AITER_FP4BMM=0

model_path="/shareddata/deepseek-ai/DeepSeek-V3.2-mxfp4"
vllm serve $model_path \
    --tensor-parallel-size 4 \
    --data-parallel-size 1 \
    --max-num-batched-tokens 32768 \
    --trust-remote-code \
    --no-enable-prefix-caching \
    --disable-log-requests \
    --kv-cache-dtype bfloat16 \
    --gpu_memory_utilization 0.85 \
    --compilation-config '{"cudagraph_mode": "FULL_AND_PIECEWISE"}' \
    --block-size 1
```
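Once the server above is running, vLLM exposes an OpenAI-compatible `/v1/completions` route. The sketch below shows a minimal request against it; the URL and model path mirror the commands in this card, and the helper names are illustrative.

```python
import json
import urllib.request

# These values match the serve command in this card; adjust for your setup.
BASE_URL = "http://127.0.0.1:8000/v1/completions"
MODEL = "/shareddata/deepseek-ai/DeepSeek-V3.2-mxfp4"

def build_completion_request(prompt, max_tokens=128, temperature=0.0):
    """JSON body for the OpenAI-compatible completions endpoint."""
    return json.dumps({
        "model": MODEL,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }).encode()

def complete(prompt):
    """Send the request to the local vLLM server; requires the server to be up."""
    req = urllib.request.Request(
        BASE_URL,
        data=build_completion_request(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["text"]
```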
```bash
lm_eval \
    --model local-completions \
    --tasks gsm8k \
    --model_args model=/shareddata/deepseek-ai/DeepSeek-V3.2-mxfp4,base_url=http://127.0.0.1:8000/v1/completions \
    --batch_size auto
```
## Accuracy

| Benchmark | DeepSeek-V3.2 | DeepSeek-V3.2-mxfp4 (this model) | Recovery |
|---|---|---|---|
| gsm8k (flexible-extract) | 95.68 | 95.38 | 99.68% |
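Recovery is the quantized score as a fraction of the unquantized baseline; a quick check of the gsm8k row (the table reports two decimal places):

```python
# gsm8k scores from the table above
baseline = 95.68   # DeepSeek-V3.2
quantized = 95.38  # DeepSeek-V3.2-mxfp4 (this model)

# Fraction of baseline accuracy retained after MXFP4 quantization
recovery = quantized / baseline * 100
```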
## Reproduction

- Docker image: `rocm/vllm-private:nightly-dpskv3.2-mxfp4`
- vLLM main commit: `0900cedb3f89e475bea256c4cf5a13b5f02635bc`
## License
Modifications Copyright (c) 2025 Advanced Micro Devices, Inc. All rights reserved.
## Model Tree for amd/DeepSeek-V3.2-mxfp4

- Base model: deepseek-ai/DeepSeek-V3.2-Exp-Base
- Finetuned from: deepseek-ai/DeepSeek-V3.2