---
base_model:
- Qwen/Qwen3-Coder-480B-A35B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
---
# Model Overview
- **Model Architecture:** Qwen3MoeForCausalLM
- **Input:** Text
- **Output:** Text
- **Supported Hardware Microarchitecture:** AMD Instinct MI300/MI350/MI355
- **ROCm:** 7.0
- **PyTorch:** 2.8.0
- **Transformers:** 4.57.6
- **Operating System(s):** Linux
- **Inference Engine:** [SGLang](https://docs.sglang.ai/)/[vLLM](https://docs.vllm.ai/en/latest/)
- **Model Optimizer:** [AMD-Quark](https://quark.docs.amd.com/latest/index.html) (v0.11)
- **Weight quantization:** OCP MXFP4, Static
- **Activation quantization:** OCP MXFP4, Dynamic
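
For a quick functional check outside a serving engine, the checkpoint can also be loaded through the standard `transformers` API. The sketch below is a minimal example, not part of the official workflow; it assumes a multi-GPU ROCm machine with `accelerate` installed and AMD-Quark available so the MXFP4 weights deserialize correctly.

```
# Minimal smoke test: load the quantized checkpoint and generate once.
# Assumes enough GPU memory for the 480B MoE and amd-quark installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "amd/Qwen3-Coder-480B-A35B-Instruct-MXFP4"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # shard layers across all visible GPUs
    torch_dtype="auto",
)

prompt = "Write a Python function that reverses a singly linked list."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```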
# Model Quantization
The model was quantized from [Qwen/Qwen3-Coder-480B-A35B-Instruct](https://huggingface.co/Qwen/Qwen3-Coder-480B-A35B-Instruct) using [AMD-Quark](https://quark.docs.amd.com/latest/index.html). Both weights and activations are quantized to OCP MXFP4, with static weight scales and dynamic activation scales.
**Quantization scripts:**
```
cd Quark/examples/torch/language_modeling/llm_ptq/

# MODEL_DIR: local path of Qwen/Qwen3-Coder-480B-A35B-Instruct
# OUTPUT_DIR: destination for the quantized checkpoint
# Keep the lm_head, attention, and MoE router layers unquantized:
export EXCLUDE_LAYERS="lm_head *self_attn* *mlp.gate"

python3 quantize_quark.py --model_dir $MODEL_DIR \
    --quant_scheme mxfp4 \
    --num_calib_data 128 \
    --exclude_layers $EXCLUDE_LAYERS \
    --skip_evaluation \
    --multi_gpu \
    --trust_remote_code \
    --model_export hf_format \
    --output_dir $OUTPUT_DIR
```
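
For intuition on the format: OCP MXFP4 stores each block of 32 values as FP4 (E2M1) elements that share a single power-of-two scale. The toy sketch below illustrates the round trip in NumPy; it is a simplification for illustration only, and its scale-selection and rounding rules are not Quark's actual kernels.

```
import numpy as np

# Representable FP4 (E2M1) magnitudes per the OCP MX spec.
FP4_E2M1 = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def mxfp4_roundtrip(block):
    """Quantize a length-32 block to (toy) MXFP4 and reconstruct it."""
    amax = np.abs(block).max()
    # One shared power-of-two scale so the block max fits FP4's max (6.0).
    scale = 2.0 ** np.ceil(np.log2(amax / 6.0)) if amax > 0 else 1.0
    scaled = block / scale
    # Round each element to the nearest representable FP4 magnitude.
    idx = np.abs(np.abs(scaled)[:, None] - FP4_E2M1[None, :]).argmin(axis=1)
    return np.sign(scaled) * FP4_E2M1[idx] * scale

x = np.random.randn(32).astype(np.float32)
print("max abs error:", np.abs(x - mxfp4_roundtrip(x)).max())
```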
For further details or issues, please refer to the AMD-Quark documentation or contact the respective developers.
# Evaluation
The model was evaluated on the GSM8K benchmark using the [vLLM](https://github.com/vllm-project/vllm/tree/v0.13.0) framework (v0.13.0).
### Accuracy
| Benchmark | Qwen/Qwen3-Coder-480B-A35B-Instruct | amd/Qwen3-Coder-480B-A35B-Instruct-MXFP4 (this model) | Recovery |
| --- | --- | --- | --- |
| GSM8K (flexible-extract) | 0.8893 | 0.8954 | 100.69% |
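
Under `flexible-extract`, a response is scored correct when the last number appearing in the generated text matches the reference answer. The sketch below approximates that matching rule for illustration; the exact regex used by the evaluation script may differ.

```
import re

def flexible_extract(completion: str):
    """Return the last number in the completion, or None if there is none."""
    matches = re.findall(r"-?[\d,]*\.?\d+", completion)
    return matches[-1].replace(",", "") if matches else None

assert flexible_extract("... so the final answer is 42.") == "42"
```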
### Reproduction
The GSM8K result was obtained with the vLLM framework, starting from the Docker image `rocm/vllm:rocm7.0.0_vllm_0.11.2_20251210` and reinstalling vLLM v0.13.0 from source inside the container.
#### Preparation in container
```
# Replace the preinstalled vLLM with v0.13.0 built from source
pip uninstall vllm -y
git clone https://github.com/vllm-project/vllm.git
cd vllm
git checkout v0.13.0
pip install -r requirements/rocm.txt
python setup.py develop
cd ..
```
#### Launching server
```
# MODEL: local path or Hugging Face id of this quantized checkpoint.
# VLLM_ROCM_USE_AITER=1 enables AITER kernels on ROCm;
# VLLM_DISABLE_COMPILE_CACHE=1 avoids reusing a stale compilation cache.
VLLM_ROCM_USE_AITER=1 \
VLLM_DISABLE_COMPILE_CACHE=1 \
vllm serve "$MODEL" \
    --tensor-parallel-size 4 \
    --trust-remote-code \
    --max-model-len 32768 \
    --port 8899
```
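
Once the server reports it is ready, it exposes vLLM's OpenAI-compatible API on the chosen port. A quick sanity check before running the benchmark (a minimal sketch using `requests`; the `model` field must match the path or id passed to `vllm serve`):

```
import requests

# Query the OpenAI-compatible completions endpoint started by `vllm serve`.
resp = requests.post(
    "http://127.0.0.1:8899/v1/completions",
    json={
        "model": "amd/Qwen3-Coder-480B-A35B-Instruct-MXFP4",  # same as $MODEL
        "prompt": "def fibonacci(n):",
        "max_tokens": 64,
        "temperature": 0,
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```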
#### Evaluating the model in a new terminal
```
python vllm/tests/evals/gsm8k/gsm8k_eval.py --host http://127.0.0.1 --port 8899 --num-questions 1000 --save-results logs
```
# License
Modifications Copyright (c) 2026 Advanced Micro Devices, Inc. All rights reserved.