gpt-oss-20b-WFP8-AFP8-KVFP8

  • Introduction

    This model was quantized from openai/gpt-oss-20b using AMD-Quark with calibration samples from the Pile dataset.

  • Quantization schemes

    • Quantized Layers: All linear layers (both attention and MoE linear layers), excluding lm_head
    • Weight: quantized using FP8 symmetric per-tensor scheme
    • Activation: quantized using FP8 symmetric per-tensor scheme
    • KV Cache: FP8 symmetric per-tensor
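    The per-tensor symmetric scheme above can be illustrated with a short sketch: one scale is computed per tensor so that the largest magnitude maps onto the FP8 (E4M3) maximum of 448. This is a simplified illustration with hypothetical helper names, not AMD-Quark's implementation; a real FP8 cast also rounds the mantissa to E4M3 precision, which is omitted here.

```python
FP8_E4M3_MAX = 448.0  # largest finite value representable in FP8 E4M3

def per_tensor_scale(values):
    """Symmetric per-tensor scale: map the largest magnitude onto the FP8 max."""
    amax = max(abs(v) for v in values)
    return amax / FP8_E4M3_MAX if amax > 0 else 1.0

def fake_quantize(values):
    """Scale, clamp to the representable FP8 range, then dequantize.

    A real FP8 cast also rounds the mantissa to E4M3 precision; that
    step is omitted to keep the sketch short.
    """
    s = per_tensor_scale(values)
    out = []
    for v in values:
        x = max(-FP8_E4M3_MAX, min(FP8_E4M3_MAX, v / s))  # clamp to FP8 range
        out.append(x * s)  # dequantize back to the original scale
    return out, s
```

Because the scheme is symmetric, no zero-point is stored; each quantized tensor carries only a single scale factor.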
  • Quantization script

python examples/torch/language_modeling/llm_ptq/quantize_quark.py \
    --multi_gpu \
    --model_dir openai/gpt-oss-20b \
    --quant_scheme w_fp8_a_fp8 \
    --kv_cache_dtype fp8 \
    --exclude_layers lm_head \
    --dataset pileval \
    --num_calib_data 128 \
    --output_dir amd/gpt-oss-20b-WFP8-AFP8-KVFP8 \
    --model_export hf_format \
    --skip_evaluation

Deployment

This model supports deployment through the vLLM backend. Please ensure that PR #29008 and PR #31962 have been correctly applied.
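A minimal serving sketch, assuming a vLLM build with the PRs above applied (the tensor-parallel size is an example choice, not a requirement):

```shell
# Launch an OpenAI-compatible server for the quantized checkpoint.
# --tensor-parallel-size is illustrative; TP1 through TP8 were evaluated.
vllm serve amd/gpt-oss-20b-WFP8-AFP8-KVFP8 --tensor-parallel-size 2
```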

Evaluation

This model is evaluated on the gpqa_diamond_generative_n_shot and gsm8k_platinum tasks using the lm-evaluation-harness framework with the vLLM backend.
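A sketch of the corresponding harness invocation (the exact flags and task defaults depend on the lm-evaluation-harness version; this is an assumed command line, not the one used to produce the scores below):

```shell
# Evaluate the quantized model with lm-evaluation-harness on the vLLM backend.
# tensor_parallel_size=1 corresponds to the TP1 column of the results table.
lm_eval --model vllm \
  --model_args pretrained=amd/gpt-oss-20b-WFP8-AFP8-KVFP8,tensor_parallel_size=1 \
  --tasks gpqa_diamond_generative_n_shot,gsm8k_platinum \
  --batch_size auto
```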

Evaluation scores

| Model name | Weight | Activation | KV cache | Exclude | gpqa TP1 | gpqa TP2 | gpqa TP4 | gpqa TP8 | gsm8k TP1 | gsm8k TP2 | gsm8k TP4 | gsm8k TP8 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| openai/gpt-oss-20b | MXFP4 | BF16 | BF16 | - | 0.5606 | 0.5303 | 0.5657 | 0.5606 | 0.9016 | 0.9024 | 0.9032 | 0.8966 |
| amd/gpt-oss-20b-WFP8-AFP8-KVFP8 | FP8 | FP8 | FP8 | *bias, lm_head | 0.5505 | 0.5556 | 0.5253 | 0.5253 | 0.9024 | 0.9107 | 0.9024 | 0.8983 |

gpqa = gpqa_diamond_generative_n_shot (5-shot); gsm8k = gsm8k_platinum; TPn = tensor parallelism of degree n.

Disclaimer

This model is intentionally quantized for vLLM's CI tests (tests/models/quantization/test_gpt_oss.py). Model performance is not guaranteed to be optimal.

License

Modifications copyright (c) 2026 Advanced Micro Devices, Inc. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
