---
library_name: peft
license: mit
base_model: microsoft/Phi-4-mini-instruct
tags:
- axolotl
- base_model:adapter:microsoft/Phi-4-mini-instruct
- lora
- transformers
datasets:
- DannyAI/African-History-QA-Dataset
pipeline_tag: text-generation
model-index:
- name: phi4_lora_axolotl
  results: []
language:
- en
metrics:
- bertscore
---

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.14.0.dev0`
```yaml
base_model: microsoft/Phi-4-mini-instruct
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

# 1. Dataset Configuration
datasets:
  - path: DannyAI/African-History-QA-Dataset
    split: train
    type: alpaca_chat.load_qa
    system_prompt: "You are a helpful AI assistant specialised in African history which gives concise answers to questions asked"
test_datasets:
  - path: DannyAI/African-History-QA-Dataset
    split: validation
    type: alpaca_chat.load_qa
    # Fixed the missing quote and indentation below
    system_prompt: "You are a helpful AI assistant specialised in African history which gives concise answers to questions asked"

# 2. Output & Chat Configuration
output_dir: ./phi4_african_history_lora_out
chat_template: tokenizer_default
train_on_inputs: false

# 3. Batch Size Configuration
micro_batch_size: 2
gradient_accumulation_steps: 4

# 4. LoRA Configuration
adapter: lora
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules: [q_proj, v_proj, k_proj, o_proj]

# 5. Hardware & Efficiency
sequence_len: 2048
sample_packing: true
eval_sample_packing: false 
pad_to_sequence_len: true
bf16: true
fp16: false

# 6. Training Duration & Optimizer
max_steps: 650  
# removed
# num_epochs: 
warmup_steps: 20
learning_rate: 0.00002
optimizer: adamw_torch 
lr_scheduler: cosine

# 7. Logging & Evaluation
wandb_project: phi4_african_history
wandb_name: phi4_lora_axolotl

eval_strategy: steps
eval_steps: 50
save_strategy: steps
save_steps: 100
logging_steps: 5

# 8. Public Hugging Face Hub Upload
hub_model_id: DannyAI/phi4_lora_axolotl
push_adapter_to_hub: true
hub_private_repo: false

```

</details><br>

# Model Card for phi4_lora_axolotl

This is a LoRA fine-tuned version of **microsoft/Phi-4-mini-instruct** for African history question answering, trained on the **DannyAI/African-History-QA-Dataset**.
It achieves a loss of 1.7479 on the validation set.

## Model Details

### Model Description

- **Developed by:** Daniel Ihenacho
- **Funded by:** Daniel Ihenacho
- **Shared by:** Daniel Ihenacho
- **Model type:** Text Generation
- **Language(s) (NLP):** English
- **License:** mit
- **Finetuned from model:** microsoft/Phi-4-mini-instruct

## Uses

The model is intended for question answering about African history.

### Out-of-Scope Use

The model can generate text on topics beyond African history, but it is not intended or evaluated for such use.

## How to Get Started with the Model

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from peft import PeftModel

model_id = "microsoft/Phi-4-mini-instruct"

tokeniser = AutoTokenizer.from_pretrained(model_id)

# Load the base model
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    trust_remote_code=False,
)

# Load the fine-tuned LoRA adapter on top of the base model
lora_id = "DannyAI/phi4_lora_axolotl"
lora_model = PeftModel.from_pretrained(model, lora_id)

generator = pipeline(
    "text-generation",
    model=lora_model,
    tokenizer=tokeniser,
)

def generate_answer(question: str) -> str:
    """Generate an answer for the given question using the fine-tuned LoRA model."""
    messages = [
        {"role": "system", "content": "You are a helpful AI assistant specialised in African history which gives concise answers to questions asked."},
        {"role": "user", "content": question},
    ]
    output = generator(
        messages,
        max_new_tokens=2048,
        do_sample=False,  # greedy decoding; temperature has no effect when sampling is off
        return_full_text=False,
    )
    return output[0]["generated_text"].strip()

question = "What is the significance of African feminist scholarly activism in contemporary resistance movements?"
print(generate_answer(question))
```
```
# Example output
African feminist scholarly activism is significant in contemporary resistance movements as it provides a critical framework for understanding and addressing the specific challenges faced by African women in the context of global capitalism, neocolonialism, and patriarchal structures.
```

## Training Details

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Perplexity | Active (GiB) | Allocated (GiB) | Reserved (GiB) |
|:-------------:|:-------:|:----:|:---------------:|:----------:|:------------:|:---------------:|:--------------:|
| No log        | 0       | 0    | 2.1184          | 8.3175 | 14.82        | 14.82           | 15.37          |
| 5.394         | 3.8627  | 50   | 2.1004          | 8.1694 | 14.84        | 14.84           | 31.82          |
| 4.4484        | 7.7059  | 100  | 2.0367          | 7.6652 | 14.84        | 14.84           | 31.84          |
| 3.7583        | 11.5490 | 150  | 1.9785          | 7.2316 | 14.84        | 14.84           | 31.84          |
| 3.363         | 15.3922 | 200  | 1.9299          | 6.8886 | 14.84        | 14.84           | 31.84          |
| 3.0568        | 19.2353 | 250  | 1.8664          | 6.4652 | 14.84        | 14.84           | 31.84          |
| 2.8736        | 23.0784 | 300  | 1.8134          | 6.1314 | 14.84        | 14.84           | 31.79          |
| 2.7646        | 26.9412 | 350  | 1.7851          | 5.9604 | 14.84        | 14.84           | 31.79          |
| 2.6891        | 30.7843 | 400  | 1.7668          | 5.8523 | 14.84        | 14.84           | 31.79          |
| 2.6843        | 34.6275 | 450  | 1.7581          | 5.8014 | 14.84        | 14.84           | 31.79          |
| 2.6048        | 38.4706 | 500  | 1.7534          | 5.7739 | 14.84        | 14.84           | 31.79          |
| 2.6118        | 42.3137 | 550  | 1.7505          | 5.7573 | 14.84        | 14.84           | 31.79          |
| 2.6024        | 46.1569 | 600  | 1.7503          | 5.7565 | 14.84        | 14.84           | 31.79          |
| 2.5727        | 50.0    | 650  | 1.7479          | 5.7428 | 14.84        | 14.84           | 31.79          |
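The reported perplexity is simply the exponential of the validation cross-entropy loss. A quick sanity check against the final row of the table:

```python
import math

final_val_loss = 1.7479  # final validation loss from the table above

# Perplexity is the exponential of the cross-entropy loss
perplexity = math.exp(final_val_loss)

print(perplexity)  # ~5.742, matching the reported 5.7428
```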



### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- training_steps: 650
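The total train batch size of 8 is not set directly; it follows from the config as micro_batch_size × gradient_accumulation_steps × number of devices (one GPU here). As a quick check:

```python
micro_batch_size = 2             # per-device batch size
gradient_accumulation_steps = 4
num_devices = 1                  # single A40 instance

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)    # 8
```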

### LoRA Configuration
- r: 8
- lora_alpha: 16
- target_modules: ["q_proj", "v_proj", "k_proj", "o_proj"]
- lora_dropout: 0.05 # dataset is small, hence a low dropout value
- bias: "none"
- task_type: "CAUSAL_LM"
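For intuition, this adapter adds a low-rank update to each targeted projection: the effective weight is W + (lora_alpha / r) · BA, where A is r × d and B is d × r. A minimal NumPy sketch of the idea, using an illustrative hidden size rather than the actual Phi-4 dimensions:

```python
import numpy as np

d, r, alpha = 16, 8, 16             # illustrative hidden size; lora_r and lora_alpha from the config
rng = np.random.default_rng(0)

W = rng.normal(size=(d, d))         # frozen base projection weight
A = rng.normal(size=(r, d)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                # trainable up-projection, zero-initialised

scaling = alpha / r                 # 16 / 8 = 2.0
W_eff = W + scaling * (B @ A)       # effective weight after merging the adapter

# Because B starts at zero, the adapter is a no-op before any training
assert np.allclose(W_eff, W)
print(scaling)  # 2.0
```

Only A and B (roughly 2·d·r parameters per projection) are trained, which is why the adapter checkpoint is tiny compared with the base model.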

## Evaluation

### Metrics
| Model | BERTScore | TinyMMLU | TinyTruthfulQA |
|:-----:|:---------:|:--------:|:--------------:|
| Base model | 0.88868 | 0.6837 | 0.49745 |
| Fine-tuned model | 0.88981 | 0.67371 | 0.46626 |

## Compute Infrastructure

[Runpod](https://console.runpod.io/).

### Hardware

Runpod A40 GPU instance

### Framework versions

- PEFT 0.18.1
- Transformers 4.57.6
- Pytorch 2.9.1+cu128
- Datasets 4.5.0
- Tokenizers 0.22.2


## Citation

If you use this model, please cite:
```
@misc{Ihenacho2026phi4_lora_axolotl,
  author    = {Daniel Ihenacho},
  title     = {phi4_lora_axolotl},
  year      = {2026},
  publisher = {Hugging Face Models},
  url       = {https://huggingface.co/DannyAI/phi4_lora_axolotl},
  urldate   = {2026-01-27},
}
```

## Model Card Authors

Daniel Ihenacho

## Model Card Contact

- [LinkedIn](https://www.linkedin.com/in/daniel-ihenacho-637467223)
- [GitHub](https://github.com/daniau23)