Meet7 0.6B

A general-purpose, non-reasoning LoRA fine-tune of Qwen3-0.6B, trained in under 10 minutes on just 600 samples.

Benchmarks

Scores are acc_norm; Δ is the absolute difference in percentage points.

| Task | Shots | Qwen3-0.6B | Meet7 0.6B | Δ |
| --- | --- | --- | --- | --- |
| BoolQ | 0-shot | 0.3798 | 0.5554 | +17.56 |
| ARC Easy | 3-shot | 0.3636 | 0.4394 | +7.58 |
| ARC Challenge | 3-shot | 0.2952 | 0.3456 | +5.04 |
| HellaSwag | 3-shot | 0.3956 | 0.4323 | +3.67 |
| PIQA | 0-shot | 0.6338 | 0.6583 | +2.45 |
| Winogrande | 0-shot | 0.5225 | 0.5201 | −0.24 |
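The Δ column is simply the difference between the two acc_norm scores, scaled to percentage points. A quick sketch of that arithmetic, using the numbers from the table:

```python
# Recompute the Δ column: acc_norm difference in percentage points.
# (base score, fine-tuned score) per task, copied from the table above.
scores = {
    "BoolQ":         (0.3798, 0.5554),
    "ARC Easy":      (0.3636, 0.4394),
    "ARC Challenge": (0.2952, 0.3456),
    "HellaSwag":     (0.3956, 0.4323),
    "PIQA":          (0.6338, 0.6583),
    "Winogrande":    (0.5225, 0.5201),
}

for task, (base, tuned) in scores.items():
    delta = (tuned - base) * 100  # percentage points, e.g. BoolQ -> +17.56
    print(f"{task}: {delta:+.2f}")
```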

Model Details

- Developed by: Ma7ee7
- License: Apache-2.0
- Base model: unsloth/Qwen3-0.6B-unsloth-bnb-4bit
- Training time: under 10 minutes
- Training samples: 600

Trained 2x faster with Unsloth and Hugging Face TRL.
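Assuming the fine-tuned weights are published on the Hugging Face Hub under the repo name shown on this card (`Ma7ee7/Meet7_0.6b`), a minimal `transformers` inference sketch might look like this; `enable_thinking=False` is a Qwen3 chat-template option that matches the non-reasoning style this fine-tune targets:

```python
# Minimal inference sketch. Assumptions: the merged weights are available as
# Ma7ee7/Meet7_0.6b on the Hub, and `transformers` + `torch` are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ma7ee7/Meet7_0.6b"  # repo name from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

messages = [{"role": "user", "content": "Name three uses of a 0.6B model."}]
# Qwen3's chat template accepts enable_thinking; False disables the
# reasoning (<think>) mode, matching this non-reasoning fine-tune.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    enable_thinking=False,
    return_tensors="pt",
)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

This is a sketch, not the author's published usage code; for the 4-bit base listed above, loading through Unsloth's `FastLanguageModel` would be the closer match.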
