# Long Arithmetic – Standard

A LoRA adapter for Qwen/Qwen2.5-1.5B-Instruct, fine-tuned on long arithmetic via Algorithmic Template SFT.

Part of the Algorithmic SFT vs. Distillation experiment, which studies whether deterministic algorithmic templates teach procedural reasoning more effectively than distillation from large reasoning models.

## Training

| Parameter | Value |
|---|---|
| Base model | Qwen/Qwen2.5-1.5B-Instruct |
| Method | Algorithmic Template SFT |
| Framework | LLaMA-Factory (SFT stage) |
| LoRA rank | 64 |
| LoRA target | all linear layers |
| Learning rate | 1e-4 |
| Epochs | 3 |
| Batch size | 4 (gradient accumulation 4) |
| Cutoff length | 32,768 tokens |
| Training data | 5,000 deterministic carry-propagation traces (d4: 3-digit × 2-3-digit multiplication) |
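
For reference, a minimal LLaMA-Factory config matching the table above might look like the sketch below. The `dataset`, `template`, and `output_dir` values are hypothetical placeholders, not the experiment's actual settings.

```yaml
# Hypothetical LLaMA-Factory SFT config reconstructed from the table above.
model_name_or_path: Qwen/Qwen2.5-1.5B-Instruct
stage: sft
do_train: true
finetuning_type: lora
lora_rank: 64
lora_target: all                    # all linear layers
dataset: long_arithmetic_traces     # placeholder dataset name
template: qwen                      # assumed chat template for Qwen2.5
cutoff_len: 32768
learning_rate: 1.0e-4
num_train_epochs: 3
per_device_train_batch_size: 4
gradient_accumulation_steps: 4
bf16: true
output_dir: saves/algo-sft-long-arithmetic-standard   # placeholder
```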

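The training data consists of deterministic carry-propagation traces. The exact template is not published on this card, but a minimal sketch of how such a trace might be generated looks like this; the function names and trace wording are illustrative assumptions:

```python
# Sketch of a deterministic carry-propagation trace for long multiplication.
# The wording and structure of the real training template are assumptions.

def carry_steps(a: int, d: int) -> tuple[int, list[str]]:
    """Multiply `a` by the single digit `d`, spelling out every carry."""
    steps, carry, digits = [], 0, []
    for ch in reversed(str(a)):
        prod = int(ch) * d + carry
        digits.append(prod % 10)
        steps.append(f"{ch}*{d} + carry {carry} = {prod}: write {prod % 10}, carry {prod // 10}")
        carry = prod // 10
    if carry:
        digits.append(carry)
        steps.append(f"write final carry {carry}")
    value = int("".join(str(x) for x in reversed(digits)))
    return value, steps

def multiplication_trace(a: int, b: int) -> str:
    """Render a * b as a step-by-step long-multiplication trace."""
    lines, partials = [f"Compute {a} * {b}."], []
    for place, ch in enumerate(reversed(str(b))):
        value, steps = carry_steps(a, int(ch))
        partials.append(value * 10**place)
        lines.append(f"Partial product for digit {ch} (place 10^{place}):")
        lines.extend("  " + s for s in steps)
    total = sum(partials)
    lines.append(f"Sum of partial products: {' + '.join(map(str, partials))} = {total}")
    return "\n".join(lines)

print(multiplication_trace(123, 45))  # in-distribution shape: 3-digit x 2-digit
```
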
## Evaluation (v3, `MAX_TOKENS=32768`)

| Split | Accuracy |
|---|---|
| Test (in-distribution) | 92.6% |
| Harder variant | 21.2% |
| Structural OOD (chain operations) | 0.0% |
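
The scoring procedure is not detailed on this card; a plausible exact-match scorer consistent with these accuracy numbers might look like the sketch below, where the answer-extraction heuristic is purely an assumption:

```python
# Hypothetical exact-match scoring; the extraction regex is an assumption.
import re

def extract_final_int(completion: str) -> str | None:
    """Take the last integer in the completion as the model's final answer."""
    matches = re.findall(r"-?\d[\d,]*", completion)
    return matches[-1].replace(",", "") if matches else None

def exact_match_accuracy(completions: list[str], references: list[int]) -> float:
    """Fraction of completions whose final integer equals the reference."""
    hits = sum(extract_final_int(c) == str(r) for c, r in zip(completions, references))
    return hits / len(references)
```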

## Notes

The adapter is strong in-distribution but collapses on the structural OOD split (chain operations, i.e., prompts that chain several arithmetic steps rather than posing the single multiplication seen in training). Neither the algorithmic-template nor the distillation variant generalizes on this split.

## Usage

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")
model = PeftModel.from_pretrained(base, "reasoning-degeneration-dev/algo-sft-long-arithmetic-standard")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")
```
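
A quick smoke test, assuming the base model's chat template; the prompt and generation settings below are illustrative and not the evaluation setup:

```python
# Illustrative inference call; the prompt wording is an assumption.
messages = [{"role": "user", "content": "Compute 123 x 45. Show your work."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=1024)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```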
