Quark Quantized MXFP4 models
This model was quantized from Qwen/Qwen3.5-397B-A17B-FP8 using AMD-Quark. Both weights and activations are quantized to MXFP4.
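For intuition, MXFP4 (the OCP Microscaling FP4 format) groups a tensor into 32-element blocks, each sharing one power-of-two (E8M0) scale over signed E2M1 element values. The sketch below is illustrative only and is not AMD-Quark's actual kernel; the scale choice follows the usual convention of mapping the block maximum near 6.0, the largest E2M1 magnitude:

```python
import math

# Non-negative E2M1 grid; each element also carries a sign bit.
FP4_VALUES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_mxfp4_block(block):
    """Quantize one 32-element block to MXFP4: a shared power-of-two
    scale plus one signed E2M1 value per element (illustrative only)."""
    assert len(block) == 32
    amax = max(abs(x) for x in block)
    if amax == 0.0:
        return 1.0, [0.0] * 32
    # Shared E8M0 scale: a power of two chosen so the largest element
    # maps near 6.0, the largest representable E2M1 magnitude.
    scale = 2.0 ** (math.floor(math.log2(amax)) - 2)
    quantized = []
    for x in block:
        mag = min(abs(x) / scale, 6.0)                        # clip to E2M1 range
        q = min(FP4_VALUES, key=lambda v: abs(v - mag))       # round to nearest
        quantized.append(math.copysign(q, x))
    return scale, quantized

def dequantize_mxfp4_block(scale, quantized):
    """Reconstruct approximate values from the scale and E2M1 codes."""
    return [scale * q for q in quantized]
```

A block of identical values round-trips exactly (e.g. `[1.0] * 32` yields scale 0.25 and code 4.0); general blocks incur the expected FP4 rounding error.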
Quantization script:

```python
import os

from quark.torch import LLMTemplate, ModelQuantizer
from quark.common.profiler import GlobalProfiler

# Register the qwen3_5_moe template
qwen3_5_moe_template = LLMTemplate(
    model_type="qwen3_5_moe",
    kv_layers_name=["*k_proj", "*v_proj"],
    q_layer_name="*q_proj",
)
LLMTemplate.register_template(qwen3_5_moe_template)

# Configuration
ckpt_path = "Qwen/Qwen3.5-397B-A17B-FP8"
output_dir = "amd/Qwen3.5-397B-A17B-MXFP4"
quant_scheme = "mxfp4"
exclude_layers = [
    "lm_head", "model.visual.*", "mtp.*", "*mlp.gate",
    "*shared_expert_gate*", "*.linear_attn.*", "*.self_attn.*",
    "*.shared_expert.*",
]

# Get the quantization config from the template
template = LLMTemplate.get("qwen3_5_moe")
quant_config = template.get_config(scheme=quant_scheme, exclude_layers=exclude_layers)

# Quantize in file-to-file mode
profiler = GlobalProfiler(output_path=os.path.join(output_dir, "quark_profile.yaml"))
quantizer = ModelQuantizer(quant_config)
quantizer.direct_quantize_checkpoint(
    pretrained_model_path=ckpt_path,
    save_path=output_dir,
)
```
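The `exclude_layers` entries are shell-style wildcard patterns matched against module names, keeping layers such as the LM head, attention projections, and MoE gates unquantized. The snippet below illustrates the glob semantics with Python's standard `fnmatch`; Quark's internal matcher may differ in detail, and the layer names shown are hypothetical examples:

```python
from fnmatch import fnmatch

exclude_layers = [
    "lm_head", "model.visual.*", "mtp.*", "*mlp.gate",
    "*shared_expert_gate*", "*.linear_attn.*", "*.self_attn.*",
    "*.shared_expert.*",
]

def is_excluded(layer_name):
    """Return True if the layer name matches any exclusion pattern."""
    return any(fnmatch(layer_name, pat) for pat in exclude_layers)

# Hypothetical layer names, for illustration only:
print(is_excluded("lm_head"))                                # True
print(is_excluded("model.layers.0.self_attn.q_proj"))        # True ("*.self_attn.*")
print(is_excluded("model.layers.0.mlp.gate"))                # True ("*mlp.gate")
print(is_excluded("model.layers.0.mlp.experts.0.down_proj")) # False: gets quantized
```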
For further details or issues, please refer to the AMD-Quark documentation or contact the respective developers.
The model was evaluated on the GSM8K benchmark using the vLLM framework.
| Benchmark | Qwen/Qwen3.5-397B-A17B-FP8 | amd/Qwen3.5-397B-A17B-MXFP4 (this model) | Recovery |
|---|---|---|---|
| gsm8k (flexible-extract) | 95.38 | 94.24 | 98.80% |
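The Recovery column is simply the quantized model's score expressed as a percentage of the FP8 baseline:

```python
baseline = 95.38   # gsm8k flexible-extract, FP8 baseline
quantized = 94.24  # this MXFP4 model

recovery = quantized / baseline * 100
print(f"{recovery:.2f}%")  # → 98.80%
```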
The GSM8K results were obtained using the vLLM framework, based on the Docker image rocm/vllm-dev:nightly_main_20260211, with vLLM installed inside the container:
```shell
lm_eval \
  --model vllm \
  --model_args pretrained=$MODEL,tensor_parallel_size=1,max_model_len=262144,gpu_memory_utilization=0.90,max_gen_toks=2048,trust_remote_code=True,reasoning_parser=qwen3 \
  --tasks gsm8k --num_fewshot 5 \
  --batch_size auto
```
Modifications Copyright(c) 2026 Advanced Micro Devices, Inc. All rights reserved.