Model Details

This model is an MXFP8-quantized version of Qwen/Qwen3-235B-A22B, generated with intel/auto-round. Please follow the license of the original model.

How to Use

Step-by-step instructions for quantization and evaluation can be found in the Intel Neural Compressor examples.
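Since the evaluation below was run through the vLLM backend, a minimal inference sketch with vLLM is shown here. It assumes this checkpoint is published as INCModel1/Qwen3-235B-A22B-MXFP8-AutoRound and that your vLLM build can load MXFP8 checkpoints produced by auto-round; the tensor-parallel degree and sampling settings are illustrative only.

```python
from vllm import LLM, SamplingParams

# Repo ID as it appears on this card; adjust tensor_parallel_size to your GPU count.
model_id = "INCModel1/Qwen3-235B-A22B-MXFP8-AutoRound"

llm = LLM(model=model_id, tensor_parallel_size=8, trust_remote_code=True)
sampling_params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=512)

prompts = ["Give a one-sentence summary of mixed-precision quantization."]
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```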

Evaluation Results

| Task      | Backend | BF16   | MXFP8  |
|-----------|---------|--------|--------|
| hellaswag | vllm    | 0.6794 | 0.6768 |
| piqa      | vllm    | 0.8177 | 0.8221 |
| mmlu      | vllm    | 0.8492 | 0.8472 |
| gsm8k     | vllm    | 0.9242 | 0.9325 |
| average   | vllm    | 0.8176 | 0.8196 |
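The table above was produced with the vLLM backend of lm-evaluation-harness. A sketch of how such a run could be reproduced with the lm_eval Python API follows; the repo ID, tensor-parallel degree, and the assumption that the harness's vLLM backend loads this MXFP8 checkpoint directly are illustrative and not confirmed by this card.

```python
import lm_eval

# Reproduction sketch: evaluate the quantized checkpoint on the same four tasks
# through the vLLM backend of lm-evaluation-harness.
results = lm_eval.simple_evaluate(
    model="vllm",
    model_args="pretrained=INCModel1/Qwen3-235B-A22B-MXFP8-AutoRound,tensor_parallel_size=8",
    tasks=["hellaswag", "piqa", "mmlu", "gsm8k"],
)

# Print the per-task metric dictionaries reported by the harness.
for task, metrics in results["results"].items():
    print(task, metrics)
```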

Ethical Considerations and Limitations

The model can produce factually incorrect output and should not be relied on for factually accurate information. Because of the limitations of the pretrained model and the fine-tuning datasets, it is possible that this model could generate lewd, biased, or otherwise offensive outputs.

Therefore, before deploying any applications of the model, developers should perform safety testing.

Caveats and Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

Here are a couple of useful links to learn more about Intel's AI software:

- Intel Neural Compressor: https://github.com/intel/neural-compressor
- Intel Extension for Transformers: https://github.com/intel/intel-extension-for-transformers

Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
