# Long Arithmetic - Chunked

A LoRA adapter for Qwen/Qwen2.5-1.5B-Instruct, fine-tuned on long-arithmetic problems via Algorithmic Template SFT.

Part of the Algorithmic SFT vs. Distillation experiment, which studies whether deterministic algorithmic templates teach procedural reasoning more effectively than distillation from large reasoning models.
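
The "chunked" variant trains on multiplication traces that work in fixed-width digit chunks rather than single digits. The exact trace template is not published in this card; the following is a minimal Python sketch of what a deterministic chunked-multiplication trace generator could look like, assuming "d4" means 4-digit (base-10^4) chunks. Function names and the trace wording are illustrative, not the actual training format.

```python
# Illustrative sketch only: the real trace template used for training is not
# documented in this card. It shows the general idea of a deterministic
# "chunked" multiplication template: split both operands into 4-digit limbs,
# multiply limb-by-limb, propagate carries, and emit every step as text.

def chunk(n: int, width: int = 4) -> list[int]:
    """Split n into base-10**width limbs, least significant first."""
    base = 10 ** width
    limbs = []
    while n:
        limbs.append(n % base)
        n //= base
    return limbs or [0]

def chunked_mul_trace(a: int, b: int, width: int = 4) -> str:
    base = 10 ** width
    xs, ys = chunk(a, width), chunk(b, width)
    lines = [f"Multiply {a} * {b} using {width}-digit chunks."]
    # Accumulate limb products at each position, then propagate carries.
    acc = [0] * (len(xs) + len(ys))
    for i, x in enumerate(xs):
        for j, y in enumerate(ys):
            p = x * y
            acc[i + j] += p
            lines.append(f"chunk[{i}] * chunk[{j}] = {x} * {y} = {p}")
    carry = 0
    for k in range(len(acc)):
        total = acc[k] + carry
        acc[k], carry = total % base, total // base
        lines.append(f"position {k}: {total} -> keep {acc[k]:0{width}d}, carry {carry}")
    result = sum(d * base ** k for k, d in enumerate(acc))
    assert result == a * b  # the procedure is exact by construction
    lines.append(f"Answer: {result}")
    return "\n".join(lines)
```

Each trace is fully determined by the operands, which is the point of Algorithmic Template SFT: the supervision is an exact, reproducible procedure rather than sampled output from a larger model.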

## Training

| Parameter | Value |
| --- | --- |
| Base model | Qwen/Qwen2.5-1.5B-Instruct |
| Method | Algorithmic Template SFT |
| Framework | LLaMA-Factory (SFT stage) |
| LoRA rank | 64 |
| LoRA target | all linear layers |
| Learning rate | 1e-4 |
| Epochs | 3 |
| Batch size | 4 (gradient accumulation 4) |
| Cutoff length | 32,768 tokens |
| Training data | 5,000 deterministic chunked multiplication traces (d4) |
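
For reference, a LLaMA-Factory YAML config matching these hyperparameters might look roughly like the sketch below. Key names follow the project's published example configs; the dataset name, output path, and precision setting are hypothetical placeholders, since the actual config is not included in this card.

```yaml
# Sketch only: dataset, output_dir, and bf16 are assumptions, not the card's config.
model_name_or_path: Qwen/Qwen2.5-1.5B-Instruct

stage: sft
do_train: true
finetuning_type: lora
lora_rank: 64
lora_target: all

dataset: long_arithmetic_chunked_d4   # hypothetical name registered in dataset_info.json
template: qwen
cutoff_len: 32768

per_device_train_batch_size: 4
gradient_accumulation_steps: 4
learning_rate: 1.0e-4
num_train_epochs: 3.0
bf16: true                            # assumption; precision is not stated in this card

output_dir: saves/qwen2.5-1.5b/lora/long-arithmetic-chunked
```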

## Evaluation (v3, MAX_TOKENS=32768)

| Split | Accuracy |
| --- | --- |
| Test (in-distribution) | 86.2% |
| Harder variant | 13.2% |
| Structural OOD | 0.0% |
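
The evaluation harness itself is not shipped with this card. A minimal sketch of how such an exact-match evaluation might work, assuming each example is a (prompt, gold answer) pair and that the final integer in the generation is taken as the model's answer, is:

```python
# Sketch only: the real harness, prompt format, and answer-extraction rule
# are not documented in this card.
import re

def extract_answer(text: str) -> str | None:
    # Assumption: the last integer in the generation is the final answer.
    matches = re.findall(r"-?\d[\d,]*", text)
    return matches[-1].replace(",", "") if matches else None

def accuracy(examples, generate) -> float:
    """examples: iterable of (prompt, gold_answer_str); generate: prompt -> text."""
    correct, total = 0, 0
    for prompt, gold in examples:
        total += 1
        correct += int(extract_answer(generate(prompt)) == gold)
    return correct / max(total, 1)
```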

## Notes

Weaker than the standard (non-chunked) variant, and it exhibits the same complete failure (0.0%) on the structural OOD split.

## Usage

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")
model = PeftModel.from_pretrained(base, "reasoning-degeneration-dev/algo-sft-long-arithmetic-chunked")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")
```
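
To query the adapter, apply the Qwen chat template and generate. The prompt below is illustrative only, since the exact prompt format used during training is not documented in this card:

```python
# Illustrative prompt; the training prompt format is an assumption.
messages = [{"role": "user", "content": "Compute 48213 * 90467 step by step."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=32768, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Greedy decoding (`do_sample=False`) with `max_new_tokens=32768` mirrors the evaluation setting above.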
