---
license: cc-by-nc-4.0
datasets:
- openai/gsm8k
language:
- en
base_model:
- Qwen/Qwen2.5-Math-1.5B
pipeline_tag: text-generation
library_name: transformers
tags:
- math
- qwen
- lora
- mathematics
- gsm8k
---

# OpenMath  
Fine-tuning a Small Language Model (SLM) for Step-by-Step Math Reasoning  

## Overview  
OpenMath is an open-source project focused on fine-tuning a small language model for math reasoning using QLoRA (4-bit LoRA).  

This repository contains only a LoRA adapter trained on GSM8K. Users must load the base model separately and attach the adapter.  

The latest version of this adapter was trained on an AMD MI300X GPU using ROCm, showing that modern non-NVIDIA accelerators can support Hugging Face and PyTorch fine-tuning workflows end to end.  

---

## Base Model  
Qwen/Qwen2.5-Math-1.5B  

This repository does not contain the base model weights — they must be loaded from Hugging Face.  

---

## Hardware Used (Latest Training Run)  

GPU: AMD MI300X (ROCm 7.0)  
VRAM: 192 GB  
Operating System: Ubuntu 24.04  
Framework: PyTorch + Hugging Face  
Backend: ROCm  

---

## Dataset  

GSM8K (Grade School Math 8K)  
Training samples: 1,000  
Evaluation: Full GSM8K test split (1,319 problems)  

Loss was computed only on the solution portion of each example: prompt tokens were masked out of the labels so they contribute nothing to the gradient, as in the sketch below.  
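
A minimal sketch of that masking step, assuming a Hugging Face tokenizer and PyTorch's convention that label `-100` is ignored by the cross-entropy loss (illustrative, not the exact training code):

```python
# Illustrative loss-masking sketch: only solution tokens receive real labels.
def build_example(tokenizer, prompt: str, solution: str, max_len: int = 1024):
    prompt_ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
    solution_ids = tokenizer(solution + tokenizer.eos_token,
                             add_special_tokens=False)["input_ids"]
    input_ids = (prompt_ids + solution_ids)[:max_len]
    # -100 is ignored by PyTorch's CrossEntropyLoss, so prompt tokens
    # contribute nothing to the gradient; loss covers only the solution.
    labels = ([-100] * len(prompt_ids) + solution_ids)[:max_len]
    return {"input_ids": input_ids, "labels": labels}
```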

---

## Training Configuration  

Method: QLoRA (4-bit)  
Quantization: NF4 with float16 compute  
LoRA rank: 16  
LoRA alpha: 32  
LoRA dropout: 0.05  
Target modules: q_proj, k_proj, v_proj, o_proj  
Max sequence length: 1024  
Batch size: 1  
Gradient accumulation: 16  
Effective batch size: 16  
Learning rate: 1e-4  
Optimizer: paged_adamw_8bit  
Scheduler: cosine  
Warmup: 5 percent of training steps  
Epochs: 6  
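
Expressed in code, the configuration above looks roughly like this (a sketch, not the exact training script; `output_dir` is a placeholder):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization with float16 compute, as listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Math-1.5B",
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA settings from the list above: rank 16, alpha 32, dropout 0.05,
# applied to the attention projections.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Optimization: batch size 1 with 16 accumulation steps gives the
# effective batch size of 16.
args = TrainingArguments(
    output_dir="openmath-qlora",        # placeholder path
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=1e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    num_train_epochs=6,
    optim="paged_adamw_8bit",
    fp16=True,
)
```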

---

## Results  

GSM8K Accuracy (Full Test Set):  
750 of 1,319 problems correct (56.86 percent).  

This is a clear improvement over the earlier Colab T4 run and a solid result for a 1.5B model fine-tuned with LoRA.  
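
For context, GSM8K is commonly scored by comparing the gold answer after the `####` marker in each reference against the last number in the model's output. The sketch below assumes that convention; the actual evaluation script for this run is not part of this repository.

```python
import re

def final_number(text: str):
    """Return the last number appearing in the text, commas stripped."""
    nums = re.findall(r"-?\d+(?:\.\d+)?", text.replace(",", ""))
    return nums[-1] if nums else None

def is_correct(prediction: str, reference: str) -> bool:
    # GSM8K references end with "#### <answer>"; compare that gold
    # number against the last number the model produced.
    gold = reference.split("####")[-1].strip().replace(",", "")
    return final_number(prediction) == gold
```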

---

## What This Repository Contains  

adapter_model.safetensors — LoRA weights  
adapter_config.json — LoRA configuration  
chat_template.jinja — chat formatting template  
tokenizer.json — tokenizer file  
tokenizer_config.json — tokenizer settings  
README.md — documentation  

This repository does not include checkpoints, optimizer states, or full base model weights.  

---

## How to Use This Model  

Load the base model Qwen/Qwen2.5-Math-1.5B from Hugging Face, then attach this LoRA adapter with PEFT. Prompt the model with an instruction, the problem, and an open solution section, as in the sketch below.  
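
A minimal sketch with Transformers and PEFT. `ADAPTER_ID` and the exact `### Instruction / ### Problem / ### Solution` headers are placeholders, not confirmed by this card; replace them with this repository's Hub id and the formatting used in training.

```python
# Sketch: load base model + this LoRA adapter, then generate a solution.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_ID = "Qwen/Qwen2.5-Math-1.5B"
ADAPTER_ID = "<this-repo-id>"  # placeholder: this repository's Hub id

# The tokenizer files ship with the adapter (see the file list above).
tokenizer = AutoTokenizer.from_pretrained(ADAPTER_ID)
base = AutoModelForCausalLM.from_pretrained(
    BASE_ID, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, ADAPTER_ID).eval()

prompt = (
    "### Instruction:\nSolve the problem step by step.\n\n"
    "### Problem:\nA book costs $12 and a pen costs $3. "
    "How much do 2 books and 4 pens cost?\n\n"
    "### Solution:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:],
                       skip_special_tokens=True))
```

Since the repository ships a chat_template.jinja, `tokenizer.apply_chat_template` may be used instead of hand-built headers if your prompts follow the chat format.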

---

## Why This Matters  

This project demonstrates that AMD MI300X can train modern language models with Hugging Face and QLoRA.  
It shows that high-quality math reasoning is possible at 1.5B parameters using efficient fine-tuning.  
It distributes a lightweight adapter rather than a full copy of the fine-tuned model weights.  

---

## Limitations  

The model can make reasoning mistakes.  
It should not be used for exams, assignments, or professional decisions.  
Performance depends heavily on prompt formatting.  

---

## Future Work  

Train on 3,000 to 5,000 GSM8K samples.  
Add SVAMP and ASDiv datasets.  
Improve decoding to reduce repetition.  
Experiment with multi-GPU scaling on MI300X.  
Add a Streamlit demo for interactive use.  

---

## License  

cc-by-nc-4.0