Benchmark

| Capability Benchmark | Qwen3-14B-FP8 | Qwen3-14B-INT8 |
|---|---|---|
| aime25@mean_acc | 0.4778 | 0.4222 |
| gpqa_diamond@mean_acc | 0.6195 | 0.6229 |
| ifeval@mean_prompt_level_strict | 0.8324 | 0.8306 |
| live_code_bench@mean_acc | 0.6423 | 0.6430 |
| mmlu_pro@mean_acc | 0.7493 | 0.7551 |
  • Inspecting the samples with `"stop_reason": "max_tokens"` showed that all of them were repetitive answers running on until max_tokens; no normally completed answer was found to reach the maximum length.
  • repeats: 3
  • enable_thinking: True
  • evalscope version: 1.4.2
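The max_tokens inspection described above can be sketched as a short script. This is a hypothetical helper, assuming each evaluated sample is available as a dict with at least `stop_reason` and `response` fields; the exact evalscope output schema may differ.

```python
# Hypothetical sketch (field names are assumptions, not the exact
# evalscope schema): filter truncated samples, then apply a crude
# repetition heuristic to each one.

def max_token_samples(records):
    """Keep only samples whose generation was cut off at the token limit."""
    return [r for r in records if r.get("stop_reason") == "max_tokens"]

def looks_repetitive(text, window=30, times=3):
    """Crude repetition check: does the final `window`-character chunk
    occur at least `times` times in the whole response?"""
    chunk = text[-window:]
    return len(chunk) == window and text.count(chunk) >= times

if __name__ == "__main__":
    # Toy records standing in for a parsed evalscope results file.
    records = [
        {"stop_reason": "stop", "response": "The answer is 42."},
        {"stop_reason": "max_tokens", "response": "I think " + "the same line. " * 50},
    ]
    for r in max_token_samples(records):
        print("repetitive:", looks_repetitive(r["response"]))
```

On the toy records this flags the single truncated sample as repetitive, which is the pattern the bullet above describes.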

Safetensors: 15B params · tensor types I32, BF16
Model tree for lancew/Qwen3-14B-GPTQ-INT8: this model is quantized from Qwen/Qwen3-14B.