# Llama 3.2 1B - Arithmetic RL LoRA
A LoRA adapter for meta-llama/Llama-3.2-1B, trained with Tinker (by Thinking Machines) using reinforcement learning (GRPO) on arithmetic tasks.
## Training Details
- Base model: meta-llama/Llama-3.2-1B
- Method: GRPO (Group Relative Policy Optimization)
- Task: Arithmetic (addition)
- LoRA rank: 32, alpha: 32
- Target modules: all-linear
- Learning rate: 1e-4
- Group size: 4, Groups per batch: 100
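GRPO scores each group of sampled completions against the group's own statistics rather than a learned value baseline. The Tinker training loop itself is not shown on this card; a minimal sketch of the group-relative advantage computation, under the assumption of standard mean/std normalization, looks like:

```python
import statistics

def group_relative_advantages(rewards):
    """GRPO-style advantages: each reward normalized by its group's mean and std.

    A hypothetical sketch of the group-relative baseline, not the actual
    training code used for this adapter.
    """
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards)
    if std == 0.0:
        # Every completion in the group scored the same: no learning signal
        return [0.0] * len(rewards)
    return [(r - mean) / std for r in rewards]

# With group size 4 (as in training), two correct and two incorrect completions:
group_relative_advantages([1.0, 0.0, 1.0, 0.0])  # → [1.0, -1.0, 1.0, -1.0]
```

Correct completions in a mixed group get a positive advantage and incorrect ones a negative advantage, which is what drives the policy update.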
## Results
| Metric | Start | Final |
|---|---|---|
| Accuracy | 69.5% | 100% |
| Reward | 0.676 | 1.0 |
| Steps to converge | - | ~20 |
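The reward tracks accuracy closely (0.676 vs. 69.5% at the start, 1.0 vs. 100% at the end), which is consistent with a binary exact-match reward. The actual reward function is not published on this card; a hypothetical version for the addition task might be:

```python
def arithmetic_reward(a, b, completion):
    """Assumed binary reward: 1.0 if the completion parses to the correct sum.

    A sketch only; the reward used in the actual Tinker run is not documented here.
    """
    try:
        return 1.0 if int(completion.strip()) == a + b else 0.0
    except ValueError:
        # Non-numeric output earns no reward
        return 0.0
```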
## Usage

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")
model = PeftModel.from_pretrained(base, "arvindcr4/llama-3.2-1b-arithmetic-rl-lora")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")
```
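The card does not document the prompt template used during training. Hypothetical helpers for building a prompt and parsing the model's completion (names and format are assumptions, not taken from the training setup) could look like:

```python
import re

def make_prompt(a, b):
    # Hypothetical template; the actual training prompt format is undocumented
    return f"{a} + {b} ="

def parse_answer(completion):
    # Pull the first integer out of the model's completion, if any
    match = re.search(r"-?\d+", completion)
    return int(match.group()) if match else None
```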
## Platform

Trained using Tinker, a hosted fine-tuning service for open-source LLMs.