This model was built from Kimi-K2-Instruct-0905 by applying AMD Quark for MXFP4 quantization. Specifically, it was quantized from unsloth/Kimi-K2-Instruct-0905-BF16 using AMD Quark, with the weights and activations quantized to MXFP4.
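MXFP4 is the OCP microscaling FP4 format: each element is stored as a 4-bit E2M1 value, and every block of 32 consecutive elements shares one 8-bit power-of-two (E8M0) scale. The sketch below is a toy round-to-nearest fake-quantizer that illustrates the numerics; it is a simplified illustration, not Quark's actual kernel or rounding scheme.

```python
import torch

# E2M1 (FP4) representable magnitudes: 0, 0.5, 1, 1.5, 2, 3, 4, 6.
FP4_GRID = torch.tensor([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def mxfp4_fake_quant_block(x: torch.Tensor) -> torch.Tensor:
    """Fake-quantize one 32-element block: a shared power-of-two (E8M0)
    scale, with elements rounded to the nearest FP4 (E2M1) value."""
    amax = x.abs().max()
    if amax == 0:
        return x
    # Shared scale per the MX scheme: 2^(floor(log2(amax)) - emax),
    # where emax = 2 is the largest exponent of E2M1.
    scale = 2.0 ** (torch.floor(torch.log2(amax)) - 2)
    scaled = x / scale
    # Round each element to the nearest representable FP4 magnitude.
    idx = (scaled.abs().unsqueeze(-1) - FP4_GRID).abs().argmin(dim=-1)
    return torch.sign(scaled) * FP4_GRID[idx] * scale

block = torch.randn(32)
print(mxfp4_fake_quant_block(block))
```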
Quantization script:

```bash
cd Quark/examples/torch/language_modeling/llm_ptq/
exclude_layers="*self_attn* *mlp.gate *lm_head *mlp.gate_proj *mlp.up_proj *mlp.down_proj *shared_experts*"
# $exclude_layers stays unquoted so each glob is passed as a separate argument.
python quantize_quark.py \
    --model_dir unsloth/Kimi-K2-Instruct-0905-BF16 \
    --quant_scheme mxfp4 \
    --exclude_layers $exclude_layers \
    --output_dir amd/Kimi-K2-Instruct-0905-MXFP4 \
    --file2file_quantization
```
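The exclusion patterns keep the attention blocks, the MoE router gates (mlp.gate), the language-model head, the dense-MLP projections, and the shared experts in their original precision, which leaves essentially the routed expert weights in MXFP4. One way to sanity-check which modules a glob pattern catches is shell-style matching with Python's fnmatch; the module names below are hypothetical examples in a DeepSeek-V3-style MoE layout, and Quark's internal pattern resolution may differ.

```python
import fnmatch

exclude_patterns = [
    "*self_attn*", "*mlp.gate", "*lm_head",
    "*mlp.gate_proj", "*mlp.up_proj", "*mlp.down_proj",
    "*shared_experts*",
]

def is_excluded(module_name: str) -> bool:
    """True if any exclusion glob matches the module name."""
    return any(fnmatch.fnmatch(module_name, pat) for pat in exclude_patterns)

# Hypothetical module names for illustration only.
for name in [
    "model.layers.0.self_attn.kv_b_proj",        # excluded: attention
    "model.layers.5.mlp.gate",                   # excluded: MoE router
    "model.layers.5.mlp.shared_experts.up_proj", # excluded: shared expert
    "model.layers.5.mlp.experts.42.up_proj",     # not excluded: routed expert
]:
    print(f"{name}: {'excluded' if is_excluded(name) else 'quantized to MXFP4'}")
```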
This model can be deployed efficiently using the vLLM backend; see the vllm serve command below.
The model was evaluated on the GSM8K benchmark.
| Benchmark | Kimi-K2-Instruct-0905 | Kimi-K2-Instruct-0905-MXFP4 (this model) | Recovery |
|-----------|-----------------------|------------------------------------------|----------|
| GSM8K (strict-match) | 95.53 | 94.47 | 98.89% |
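Recovery is the quantized score as a fraction of the BF16 baseline: 94.47 / 95.53 ≈ 98.89%.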
The GSM8K results were obtained with the lm-evaluation-harness framework, using the Docker image rocm/vllm-private:vllm_dev_base_mxfp4_20260122 with vLLM and lm-eval compiled and installed from source inside the image. Launch the server:
```bash
export VLLM_ATTENTION_BACKEND="TRITON_MLA"
export VLLM_ROCM_USE_AITER=1
export VLLM_ROCM_USE_AITER_FUSION_SHARED_EXPERTS=0

vllm serve amd/Kimi-K2-Instruct-0905-MXFP4 \
    --port 8000 \
    --served-model-name kimi-k2-mxfp4 \
    --trust-remote-code \
    --tensor-parallel-size 8 \
    --enable-auto-tool-choice \
    --tool-call-parser kimi_k2
```
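Once the server is up, it exposes an OpenAI-compatible API. A minimal smoke test with the openai Python client, assuming the server is reachable locally on the port above:

```python
from openai import OpenAI

# Points at the vllm serve instance started above (port 8000, no auth).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="kimi-k2-mxfp4",  # matches --served-model-name above
    messages=[{"role": "user", "content": "Briefly explain MXFP4 quantization."}],
    max_tokens=128,
)
print(resp.choices[0].message.content)
```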
Run the GSM8K evaluation against the served endpoint:

```bash
lm_eval \
    --model local-completions \
    --model_args "model=kimi-k2-mxfp4,base_url=http://0.0.0.0:8000/v1/completions,tokenized_requests=False,tokenizer_backend=None,num_concurrent=32" \
    --tasks gsm8k \
    --num_fewshot 5 \
    --batch_size 1
```
Modifications Copyright (c) 2025 Advanced Micro Devices, Inc. All rights reserved.
Base model: moonshotai/Kimi-K2-Instruct-0905