# ReasoningQAT-Qwen3-8B-2bit

This model is a 2-bit pseudo-quantized version of Qwen3-8B, trained with Quantization-Aware Training (QAT) for reasoning tasks.
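
Because the weights are stored as ordinary BF16 tensors (merely constrained to 2-bit grids), the checkpoint loads like any standard Transformers causal LM. A minimal usage sketch; the prompt and generation settings below are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kusakana/ReasoningQAT-Qwen3-8B-2bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "What is 17 * 24?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```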

## Details

- Base model: Qwen3-8B
- Quantization: W2G128 (2-bit weights, group size 128)
- Format: Pseudo-quantized (stored in BF16; each weight lies on its group's 2-bit quantization grid; see the first sketch after this list)
- Method: ReasoningQAT, a QAT scheme that combines knowledge distillation with a teacher-confidence-weighted DFT loss, trained end-to-end on reasoning data (see the second sketch after this list)
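
"Pseudo-quantized" means every group of 128 weights takes at most four distinct values, even though the tensor is stored dequantized in BF16. A minimal sketch of what W2G128 fake quantization looks like; the exact quantizer used by ReasoningQAT (symmetric vs. asymmetric, how scales are fit) is an assumption here:

```python
import torch

def fake_quantize_w2g128(w: torch.Tensor, group_size: int = 128) -> torch.Tensor:
    """Asymmetric min-max 2-bit fake quantization per group of `group_size` weights."""
    out_features, in_features = w.shape
    g = w.reshape(out_features, in_features // group_size, group_size)
    wmin = g.amin(dim=-1, keepdim=True)
    wmax = g.amax(dim=-1, keepdim=True)
    scale = (wmax - wmin).clamp(min=1e-8) / 3.0  # 2 bits -> 4 levels (0..3)
    q = torch.clamp(torch.round((g - wmin) / scale), 0, 3)
    return (q * scale + wmin).reshape(out_features, in_features)

# Sanity check: re-quantizing an already pseudo-quantized weight is a no-op,
# and each group holds at most 2**2 = 4 distinct values.
w_q = fake_quantize_w2g128(torch.randn(8, 256))
assert torch.allclose(fake_quantize_w2g128(w_q), w_q, atol=1e-5)
assert w_q.reshape(8, 2, 128)[0, 0].unique().numel() <= 4
```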

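The Method entry above combines two terms. A hedged sketch of one plausible form of that objective, assuming a standard KL distillation term plus a cross-entropy term reweighted by the (detached) teacher probability of each label token in the DFT style; the paper's exact formulation may differ:

```python
import torch
import torch.nn.functional as F

def reasoning_qat_loss(student_logits, teacher_logits, labels, alpha=0.5, tau=1.0):
    """student_logits, teacher_logits: (B, T, V); labels: (B, T) token ids.
    An illustrative objective, not the paper's verified loss."""
    # Distillation term: per-token KL(teacher || student) at temperature tau.
    kd = F.kl_div(
        F.log_softmax(student_logits / tau, dim=-1),
        F.log_softmax(teacher_logits / tau, dim=-1),
        log_target=True,
        reduction="none",
    ).sum(-1).mean() * tau**2
    # DFT-style term: token cross-entropy, reweighted by the teacher's
    # detached probability of the label token, so tokens the teacher is
    # confident about dominate the gradient.
    ce = F.cross_entropy(student_logits.transpose(1, 2), labels, reduction="none")
    w = F.softmax(teacher_logits, dim=-1).gather(-1, labels.unsqueeze(-1)).squeeze(-1)
    return alpha * kd + (1.0 - alpha) * (w.detach() * ce).mean()
```
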
## Citation

```bibtex
@inproceedings{okoshi2026towards,
  title={Towards Quantization-Aware Training for Ultra-Low-Bit Reasoning {LLM}s},
  author={Yasuyuki Okoshi and Hikari Otsuka and Daichi Fujiki and Masato Motomura},
  booktitle={The Fourteenth International Conference on Learning Representations},
  year={2026},
  url={https://openreview.net/forum?id=Azsd2qyK6C}
}
```