# Algorithmic SFT vs Distillation

Part of a collection of 10 LoRA adapters and 6 datasets (16 items) comparing algorithmic-template SFT against QwQ distillation on Qwen2.5-1.5B-Instruct across 4 reasoning domains.
This repository contains a LoRA adapter for Qwen/Qwen2.5-1.5B-Instruct, fine-tuned on long arithmetic via Algorithmic Template SFT. It is part of the Algorithmic SFT vs Distillation experiment, which studies whether deterministic algorithmic templates teach procedural reasoning more effectively than distillation from a large reasoning model (QwQ).
## Training configuration

| Parameter | Value |
|---|---|
| Base model | Qwen/Qwen2.5-1.5B-Instruct |
| Method | Algorithmic Template SFT |
| Framework | LLaMA-Factory (SFT stage) |
| LoRA rank | 64 |
| LoRA target | all linear layers |
| Learning rate | 1e-4 |
| Epochs | 3 |
| Batch size | 4 (grad accum 4) |
| Cutoff length | 32,768 tokens |
| Training data | 5,000 deterministic carry-propagation traces (d4: 3-digit × 2–3-digit multiplication) |
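The card does not show the template format itself, but a deterministic carry-propagation trace for long multiplication can be sketched as below. This is a hypothetical illustration: the `carry_trace` function name and the trace wording are assumptions, not the dataset's actual format.

```python
def carry_trace(a: int, b: int) -> list[str]:
    """Hypothetical sketch of a deterministic carry-propagation trace
    for long multiplication (the dataset's real template is not shown)."""
    lines = [f"Compute {a} x {b} by long multiplication."]
    partials = []
    for i, d in enumerate(reversed(str(b))):  # one partial product per digit of b
        carry, digits = 0, []
        for da in reversed(str(a)):  # walk a's digits right to left, tracking the carry
            prod = int(da) * int(d) + carry
            digits.append(prod % 10)
            carry = prod // 10
            lines.append(f"  {da} * {d} + carry -> write {prod % 10}, carry {carry}")
        if carry:
            digits.append(carry)
        # Shift the partial product by the digit position of b
        partial = int("".join(map(str, reversed(digits)))) * 10**i
        partials.append(partial)
        lines.append(f"Partial product {i + 1}: {partial}")
    lines.append(f"Sum of partials: {sum(partials)}")
    return lines
```

Because every step is computed rather than sampled, the same input always yields the same trace, which is what makes the supervision "deterministic".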

## Results

| Split | Accuracy |
|---|---|
| Test (in-distribution) | 92.6% |
| Harder variant | 21.2% |
| Structural OOD | 0.0% (chain operations) |

Accuracy is strong in-distribution but collapses on structural OOD (chain operations); neither the algorithmic-SFT adapter nor its distillation counterpart generalizes here.
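For context, a "chain operation" composes multiple arithmetic steps, whereas the training items are single multiplications. A minimal sketch of such an OOD item, assuming the `make_chain_item` helper is illustrative and not the benchmark's actual construction:

```python
def make_chain_item(a: int, b: int, c: int) -> tuple[str, int]:
    """Hypothetical structural-OOD item: two composed multiplications,
    in contrast to the single-multiply training distribution."""
    question = f"What is ({a} * {b}) * {c}?"
    answer = (a * b) * c
    return question, answer
```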

## Usage

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter on top of it
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")
model = PeftModel.from_pretrained(base, "reasoning-degeneration-dev/algo-sft-long-arithmetic-standard")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")
```