How to use from the Transformers library
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Thomaschtl/test3")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
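When called on a chat-style list of messages like this, the pipeline returns the conversation with the generated assistant reply appended under the generated_text key.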
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Thomaschtl/test3")
model = AutoModelForCausalLM.from_pretrained("Thomaschtl/test3")

messages = [
    {"role": "user", "content": "Who are you?"},
]

# Build the chat-formatted prompt and tokenize it in one step
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)

# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
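Because the base model is Qwen3, the chat template (assuming this checkpoint inherited it unchanged) also accepts Qwen3's enable_thinking switch; a minimal sketch, replacing the apply_chat_template call above:

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    enable_thinking=False,  # Qwen3-specific kwarg: skip the <think> reasoning block
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)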

Qwen3-0.6B Quantized with QAT

This model is a quantized version of Qwen/Qwen3-0.6B using Quantization Aware Training (QAT) with Intel Neural Compressor.

🚀 Model Details

  • Base Model: Qwen/Qwen3-0.6B
  • Quantization Method: Quantization Aware Training (QAT)
  • Framework: Intel Neural Compressor
  • Model Size: Significantly reduced from the original
  • Performance: Maintains quality while improving efficiency

📊 Benefits

✅ Smaller model size - Reduced storage requirements
✅ Faster inference - Optimized for deployment
✅ Lower memory usage - More efficient resource utilization (a quick footprint check is sketched below)
✅ Maintained quality - QAT preserves model performance
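One way to verify the size and memory claims locally is Transformers' get_memory_footprint(), which reports the in-memory size of a loaded model's weights; a minimal sketch (the comparison against the base checkpoint is an illustration, not a published benchmark):

from transformers import AutoModelForCausalLM

# In-memory weight size, in bytes, for the quantized and base checkpoints
quantized = AutoModelForCausalLM.from_pretrained("Thomaschtl/qwen3-0.6b-qat-test")
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-0.6B")

print(f"quantized: {quantized.get_memory_footprint() / 1e6:.1f} MB")
print(f"base:      {base.get_memory_footprint() / 1e6:.1f} MB")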

💻 Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the quantized model
model = AutoModelForCausalLM.from_pretrained("Thomaschtl/qwen3-0.6b-qat-test")
tokenizer = AutoTokenizer.from_pretrained("Thomaschtl/qwen3-0.6b-qat-test")

# Generate text
prompt = "The future of AI is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, do_sample=True, temperature=0.7)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))

⚙️ Quantization Details

  • Training Method: Quantization Aware Training
  • Optimizer: AdamW
  • Learning Rate: 5e-5
  • Batch Size: 2
  • Epochs: 1 (demo configuration; a sketch of this setup follows)
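For readers who want to reproduce a comparable setup, the sketch below wires these hyperparameters into Intel Neural Compressor's QAT training flow. It is a minimal illustration following the INC 2.x examples (QuantizationAwareTrainingConfig, prepare_compression) with a tiny inline stand-in corpus; it is not the author's exact training script, and the API may differ across INC versions.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from neural_compressor import QuantizationAwareTrainingConfig
from neural_compressor.training import prepare_compression

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-0.6B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")

# Wrap the model so fake-quantization ops are active during training
compression_manager = prepare_compression(model, QuantizationAwareTrainingConfig())
compression_manager.callbacks.on_train_begin()
model = compression_manager.model

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Tiny stand-in batch (size 2); a real run would iterate over a proper dataset
texts = ["Hello, how are you?", "QAT lets weights adapt to rounding."]
batch = tokenizer(texts, return_tensors="pt", padding=True)
batch["labels"] = batch["input_ids"].clone()

model.train()
for epoch in range(1):  # 1 epoch, matching the demo configuration above
    loss = model(**batch).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

compression_manager.callbacks.on_train_end()
compression_manager.save("./qwen3-0.6b-qat")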

🔧 Technical Info

This model was quantized using Intel Neural Compressor's QAT approach, which:

  1. Simulates quantization during training (illustrated below)
  2. Allows the model weights to adapt to quantization
  3. Maintains better accuracy than post-training quantization
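To make step 1 concrete, here is a generic plain-PyTorch illustration of fake quantization (not Intel Neural Compressor's internal implementation): the forward pass rounds weights onto an int8 grid, while a straight-through estimator lets gradients flow to the full-precision weights.

import torch

def fake_quantize(w: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    # Symmetric per-tensor quantization: snap weights onto a signed int grid
    qmax = 2 ** (num_bits - 1) - 1
    scale = w.abs().max() / qmax
    w_q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax) * scale
    # Straight-through estimator: forward uses w_q, backward treats
    # the rounding as identity so gradients reach the float weights
    return w + (w_q - w).detach()

w = torch.randn(4, 4, requires_grad=True)
fake_quantize(w).sum().backward()
print(w.grad)  # all ones: gradients passed straight through the rounding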

📝 Citation

If you use this model, please cite:

@misc{qwen3-qat,
  title={Qwen3-0.6B Quantized with QAT},
  author={Thomaschtl},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/Thomaschtl/qwen3-0.6b-qat-test}
}

⚖️ License

This model follows the same license as the base model (Apache 2.0).
