# Qwen3.5-2B SFT Merged
This model is a merged, supervised fine-tuned version of Qwen/Qwen3.5-2B.
It was fine-tuned with QLoRA/LoRA, and the resulting adapter was merged back into the base model for direct inference.
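For reference, the merge step can be reproduced with `peft`. The sketch below is a minimal, assumed version of that step; the adapter path `qwen3.5-2b-sft-lora` is a hypothetical local directory, not a published repo.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model in full precision for the merge (merging into a
# 4-bit quantized model is not supported in the usual workflow).
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3.5-2B",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, "qwen3.5-2b-sft-lora")

# Fold the LoRA weights into the base weights so no adapter is needed
# at inference time, then save the standalone model and tokenizer.
merged = model.merge_and_unload()
merged.save_pretrained("qwen3.5-2b-sft-merged")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3.5-2B", trust_remote_code=True)
tokenizer.save_pretrained("qwen3.5-2b-sft-merged")
```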
## Model Summary
- Base model: Qwen/Qwen3.5-2B
- Fine-tuning method: QLoRA 4-bit + LoRA
- Output type: merged full model
- Primary focus: math reasoning and general reasoning
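The exact training hyperparameters are not documented here; the following is a minimal sketch of a QLoRA 4-bit + LoRA setup of the kind listed above. The rank, alpha, dropout, and target modules are illustrative assumptions, not the project's actual configuration.

```python
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization with bf16 compute, the standard QLoRA recipe.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3.5-2B",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
base = prepare_model_for_kbit_training(base)

# Illustrative LoRA config; actual values used in this project may differ.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```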
## Training Data
This model was trained on a mixed supervised fine-tuning dataset with emphasis on math reasoning.
Main datasets used include:
- nvidia/OpenMathReasoning
- jasonrqh/Math-CoT-20k
Additional instruction-following and domain-specific datasets were also used in the training pipeline.
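As a minimal sketch, the listed datasets can be pulled with the `datasets` library; the split names below, and any filtering or mixing ratios used in this project, are assumptions.

```python
from datasets import load_dataset

# Split names are assumptions; check each dataset card for the actual
# splits and columns before training.
open_math = load_dataset("nvidia/OpenMathReasoning", split="cot")
math_cot = load_dataset("jasonrqh/Math-CoT-20k", split="train")
print(len(open_math), len(math_cot))
```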
## Intended Use
This model is intended for:
- math problem solving
- reasoning-heavy instruction following
- general text generation
It is not specifically optimized for factual QA, safety-critical domains, or tool use.
## Benchmark Snapshot
Raw vs merged comparisons from this project:
| Benchmark | Raw Base (acc) | Merged Model (acc) |
|---|---|---|
| GSM8K | 0.66 | 0.74 |
| MATH-500 | 0.27 | 0.33 |
| ARC-Challenge | 0.21 | 0.29 |
| CommonsenseQA | 0.21 | 0.28 |
| BoolQ | 0.75 | 0.74 |
| WinoGrande | 0.52 | 0.51 |
These are small project-side comparisons, not an official leaderboard submission.
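For context on how numbers like these are typically produced, below is a hedged sketch of GSM8K-style exact-match scoring. `extract_final_answer` is a hypothetical helper keyed to the "Final answer:" convention used in the usage example further down; it is not the project's actual evaluation harness.

```python
import re

def extract_final_answer(text: str) -> str | None:
    # Grab the number after the last "Final answer:" marker, if any.
    matches = re.findall(r"Final answer:\s*\$?(-?[\d,]+\.?\d*)", text)
    return matches[-1].replace(",", "") if matches else None

def exact_match(predictions: list[str], references: list[str]) -> float:
    # Fraction of generations whose extracted answer equals the reference.
    hits = sum(
        extract_final_answer(pred) == ref.strip()
        for pred, ref in zip(predictions, references)
    )
    return hits / max(len(references), 1)
```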
## How to Use
### Transformers
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "YOUR_USERNAME/YOUR_MODEL_REPO"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    # bfloat16 on GPU; fall back to float32 on CPU, where half precision is slow or unsupported
    torch_dtype=torch.bfloat16 if torch.cuda.is_available() else torch.float32,
    device_map="auto",
    trust_remote_code=True,
)

messages = [
    {"role": "system", "content": "You are a careful reasoning assistant."},
    {"role": "user", "content": "Solve: If 3x + 5 = 20, what is x? End with Final answer: <answer>"},
]

# Render the chat template into a plain prompt string
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.inference_mode():
    outputs = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=False,  # greedy decoding; temperature is ignored when sampling is off
        pad_token_id=tokenizer.pad_token_id or tokenizer.eos_token_id,
        eos_token_id=tokenizer.eos_token_id,
    )

# Decode only the newly generated tokens, not the prompt
generated = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(generated, skip_special_tokens=True))
```
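For interactive use, the same call works with token streaming. This is standard `transformers` usage rather than anything model-specific, continuing from the variables above:

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated instead of decoding at the end.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
with torch.inference_mode():
    model.generate(**inputs, max_new_tokens=256, do_sample=False, streamer=streamer)
```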
## Limitations
- Performance is strongest on reasoning-style prompts close to the SFT data distribution.
- Gains are not uniform across all reasoning benchmarks.
- Some benchmark improvements may reflect output-format adaptation as well as reasoning improvement.
## Training / Eval Project
This model was trained and evaluated in a local project with raw-vs-merged comparisons on math and reasoning benchmarks including:
- GSM8K
- MATH-500
- Math-CoT-20k
- MMLU math
- BoolQ
- WinoGrande
- CommonsenseQA
- ARC-Challenge