Algorithmic SFT vs Distillation
Collection
10 LoRA adapters + 6 datasets. Algorithmic-template SFT vs. QwQ distillation on Qwen2.5-1.5B-Instruct across 4 reasoning domains. • 16 items • Updated
LoRA adapter for Qwen/Qwen2.5-1.5B-Instruct fine-tuned on cellular automata via Algorithmic Template SFT.
Part of the Algorithmic SFT vs Distillation experiment studying whether deterministic algorithmic templates teach procedural reasoning more effectively than distillation from large reasoning models.

| Parameter | Value |
|---|---|
| Base model | Qwen/Qwen2.5-1.5B-Instruct |
| Method | Algorithmic Template SFT |
| Framework | LLaMA-Factory (SFT stage) |
| LoRA rank | 64 |
| LoRA target | all linear layers |
| Learning rate | 1e-4 |
| Epochs | 3 |
| Batch size | 4 (grad accum 4) |
| Cutoff length | 32,768 tokens |
| Training data | 5,000 deterministic step-by-step simulation traces (d5: all 256 rules, 16-20 cells, 3-5 steps) |

| Split | Accuracy |
|---|---|
| Test (in-distribution) | 94.6% |
| Harder variant (larger grids, more steps) | 3.4% |
| Structural OOD (Rule 110, never seen in training) | 72.0% |
The adapter learned to read an arbitrary rule's lookup table and apply it cell by cell. It generalizes to novel rules (72.0% on the held-out Rule 110) but struggles with longer multi-step simulation on larger grids (3.4% on the harder variant).
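The rule-lookup procedure the traces teach can be sketched in a few lines. This is an illustration, not the training-data generator: it assumes binary cells and periodic (wrap-around) boundaries, neither of which the card specifies.

```python
def rule_table(rule: int) -> dict:
    """Map each 3-cell neighborhood to the next cell state under `rule` (0-255),
    using the standard Wolfram encoding: neighborhood (a, b, c) reads bit a*4+b*2+c."""
    return {
        (a, b, c): (rule >> (a * 4 + b * 2 + c)) & 1
        for a in (0, 1) for b in (0, 1) for c in (0, 1)
    }

def step(cells: list, rule: int) -> list:
    """Apply one synchronous update with periodic boundaries (an assumption)."""
    table = rule_table(rule)
    n = len(cells)
    return [table[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])] for i in range(n)]

# Rule 110, the structurally out-of-distribution rule held out from training,
# on a small grid in the trained size range.
state = [0, 0, 0, 1, 0, 0, 0, 0]
for _ in range(3):
    state = step(state, 110)
```

A training trace in this setup would walk through one `step` call neighborhood by neighborhood rather than just emitting the final state.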
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter weights on top.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")
model = PeftModel.from_pretrained(base, "reasoning-degeneration-dev/algo-sft-cellular-automata-step-simulation-d5")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")
```