Built with Axolotl

Axolotl version: 0.14.0.dev0. The full Axolotl config used for training is reproduced below.

base_model: microsoft/Phi-4-mini-instruct
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

# 1. Dataset Configuration
datasets:
  - path: DannyAI/African-History-QA-Dataset
    split: train
    type: alpaca_chat.load_qa
    system_prompt: "You are a helpful AI assistant specialised in African history."
test_datasets:
  - path: DannyAI/African-History-QA-Dataset
    split: validation
    type: alpaca_chat.load_qa
    system_prompt: "You are a helpful AI assistant specialised in African history."

# 2. Chat Configuration
chat_template: tokenizer_default
train_on_inputs: false

# 3. Batch Size Configuration
micro_batch_size: 2
gradient_accumulation_steps: 4 # Axolotl will calculate: total_batch_size = 2 * 4 * 1 GPU = 8

# 4. LoRA Configuration
adapter: lora
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules: [q_proj, v_proj, k_proj, o_proj]

# 5. Hardware & Efficiency
sequence_len: 2048
sample_packing: true
eval_sample_packing: false 
pad_to_sequence_len: true
bf16: true
fp16: false

# 6. Training Duration
max_steps: 650
# num_epochs removed: training length is controlled by max_steps
warmup_steps: 20
learning_rate: 0.00002
optimizer: adamw_torch 
lr_scheduler: cosine

# 7. Logging & DeepSpeed
deepspeed: using_axolotl/ds_config_2.json 
wandb_project: phi4_african_history
wandb_name: phi4_axolotl_stage2

eval_strategy: steps
eval_steps: 50
save_strategy: steps
save_steps: 100
logging_steps: 5

# 8. Public Hugging Face Hub Upload
hub_model_id: DannyAI/phi4_african_history_lora_ds2_axolotl
push_adapter_to_hub: true
hub_private_repo: false
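The learning_rate, warmup_steps, and cosine lr_scheduler above combine into a schedule that ramps up linearly for 20 steps and then decays to near zero by step 650. A minimal hand-rolled sketch of that schedule (an illustration of the configured values, not Axolotl's actual scheduler implementation):

```python
import math

LR_MAX = 2e-5  # learning_rate
WARMUP = 20    # warmup_steps
TOTAL = 650    # max_steps

def lr_at(step: int) -> float:
    """Cosine schedule with linear warmup, as configured above."""
    if step < WARMUP:
        return LR_MAX * step / WARMUP  # linear ramp-up
    progress = (step - WARMUP) / (TOTAL - WARMUP)
    return LR_MAX * 0.5 * (1.0 + math.cos(math.pi * progress))  # cosine decay

print(lr_at(10))   # halfway through warmup
print(lr_at(650))  # decayed to ~0 at the final step
```

The peak rate of 2e-5 is reached at step 20 and the schedule returns to roughly zero at max_steps.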

Model Card for Model ID

This is a LoRA fine-tune of microsoft/Phi-4-mini-instruct for African history question answering, trained on the DannyAI/African-History-QA-Dataset dataset. It achieves a loss of 1.7608 on the validation set.

Model Details

Model Description

  • Developed by: Daniel Ihenacho
  • Funded by: Daniel Ihenacho
  • Shared by: Daniel Ihenacho
  • Model type: Text Generation
  • Language(s) (NLP): English
  • License: mit
  • Finetuned from model: microsoft/Phi-4-mini-instruct

Uses

The model is intended for question answering about African history, such as the questions in the DannyAI/African-History-QA-Dataset.

Out-of-Scope Use

The model can technically generate text on topics beyond African history, but it is not intended for such use and has not been evaluated outside this domain.

How to Get Started with the Model

import torch
from transformers import (
    AutoTokenizer,
    AutoModelForCausalLM,
    pipeline,
)
from peft import PeftModel

model_id = "microsoft/Phi-4-mini-instruct"

tokeniser = AutoTokenizer.from_pretrained(model_id)

# Load the base model
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    trust_remote_code=False,
)

# Load the fine-tuned LoRA model
lora_id = "DannyAI/phi4_african_history_lora_ds2_axolotl"
lora_model = PeftModel.from_pretrained(model, lora_id)

generator = pipeline(
    "text-generation",
    model=lora_model,
    tokenizer=tokeniser,
)
def generate_answer(question: str) -> str:
    """Generate an answer to the given question with the fine-tuned LoRA model."""
    messages = [
        {"role": "system", "content": "You are a helpful AI assistant specialised in African history which gives concise answers to questions asked."},
        {"role": "user", "content": question},
    ]
    output = generator(
        messages,
        max_new_tokens=2048,
        do_sample=False,  # greedy decoding; temperature has no effect when sampling is off
        return_full_text=False,
    )
    return output[0]["generated_text"].strip()


question = "What is the significance of African feminist scholarly activism in contemporary resistance movements?"
print(generate_answer(question))
Example output:

African feminist scholarly activism is significant in contemporary resistance movements as it provides a critical framework for understanding and addressing the specific challenges faced by African women in the context of global capitalism, neocolonialism, and patriarchal structures.

Training Details

Training results

| Training Loss | Epoch | Step | Validation Loss | Perplexity | Active (GiB) | Allocated (GiB) | Reserved (GiB) |
|---|---|---|---|---|---|---|---|
| No log | 0 | 0 | 2.1261 | 8.3822 | 14.81 | 14.81 | 15.32 |
| 5.5167 | 3.8627 | 50 | 2.1056 | 8.2118 | 14.82 | 14.82 | 31.8 |
| 4.5059 | 7.7059 | 100 | 2.0382 | 7.6764 | 14.82 | 14.82 | 31.82 |
| 3.8251 | 11.5490 | 150 | 1.9809 | 7.2491 | 14.82 | 14.82 | 31.82 |
| 3.4152 | 15.3922 | 200 | 1.9343 | 6.9193 | 14.82 | 14.82 | 31.82 |
| 3.1617 | 19.2353 | 250 | 1.8731 | 6.5085 | 14.82 | 14.82 | 31.82 |
| 2.9075 | 23.0784 | 300 | 1.8246 | 6.2002 | 14.82 | 14.82 | 31.82 |
| 2.8267 | 26.9412 | 350 | 1.7945 | 6.0164 | 14.82 | 14.82 | 31.82 |
| 2.7239 | 30.7843 | 400 | 1.7794 | 5.9262 | 14.82 | 14.82 | 31.82 |
| 2.7275 | 34.6275 | 450 | 1.7697 | 5.8690 | 14.82 | 14.82 | 31.82 |
| 2.6912 | 38.4706 | 500 | 1.7634 | 5.8325 | 14.82 | 14.82 | 31.82 |
| 2.6632 | 42.3137 | 550 | 1.7618 | 5.8227 | 14.82 | 14.82 | 31.82 |
| 2.6604 | 46.1569 | 600 | 1.7609 | 5.8179 | 14.82 | 14.82 | 31.82 |
| 2.6795 | 50.0 | 650 | 1.7608 | 5.8168 | 14.82 | 14.82 | 31.82 |
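The perplexity column is simply the exponential of the validation loss (perplexity = exp(cross-entropy)), so the table rows can be checked directly:

```python
import math

# Perplexity is exp(validation loss); check the first and last rows of the table.
first_ppl = math.exp(2.1261)  # step 0
final_ppl = math.exp(1.7608)  # step 650
print(round(first_ppl, 4), round(final_ppl, 4))
```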

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • distributed_type: multi-GPU
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 8
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 20
  • training_steps: 650
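These figures are internally consistent with the batch-size settings in the config; a quick sanity check (assuming the single-GPU run noted in the config comment):

```python
micro_batch_size = 2             # train_batch_size above
gradient_accumulation_steps = 4
num_gpus = 1                     # single A40 instance, per the config comment

# Effective batch size per optimizer step.
total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_gpus
print(total_train_batch_size)  # 8, matching total_train_batch_size above

# Packed sequences consumed over the full run (with sample_packing, each
# sequence is a 2048-token packed window, not a single QA pair).
print(650 * total_train_batch_size)  # 5200
```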

LoRA Configuration

  • r: 8
  • lora_alpha: 16
  • target_modules: ["q_proj", "v_proj", "k_proj", "o_proj"]
  • lora_dropout: 0.05 (kept low because the dataset is small)
  • bias: "none"
  • task_type: "CAUSAL_LM"
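To illustrate what these numbers do at merge time: a LoRA update is applied as W' = W + (lora_alpha / r) * (B @ A), so with lora_alpha = 16 and r = 8 the low-rank update is scaled by 2. A toy, pure-Python sketch with hypothetical 2x2 weights and a rank-1 factor for brevity (real modules use d x r and r x d factors with r = 8):

```python
r = 8
lora_alpha = 16
scaling = lora_alpha / r  # = 2.0: the low-rank update is doubled before merging

def matmul(X, Y):
    """Plain-Python matrix multiply for the toy example."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

# Hypothetical tiny base weight and LoRA factors (identity weight, rank-1 update).
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[0.5], [0.0]]
A = [[1.0, 1.0]]

delta = matmul(B, A)  # low-rank update B @ A
W_merged = [[w + scaling * d for w, d in zip(wr, dr)] for wr, dr in zip(W, delta)]
print(W_merged)
```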

Evaluation

Metrics

| Model | BERTScore | tinyMMLU | tinyTruthfulQA |
|---|---|---|---|
| Base model | 0.88868 | 0.6837 | 0.49745 |
| Fine-tuned model | 0.88872 | 0.67371 | 0.46877 |

Compute Infrastructure

RunPod.

Hardware

A single RunPod A40 GPU instance (48 GB VRAM)

Framework versions

  • PEFT 0.18.1
  • Transformers 4.57.6
  • PyTorch 2.9.1+cu128
  • Datasets 4.5.0
  • Tokenizers 0.22.2

Citation

If you use this model, please cite:

@misc{Ihenacho2026phi4_african_history_lora_ds2_axolotl,
  author    = {Daniel Ihenacho},
  title     = {phi4_african_history_lora_ds2_axolotl},
  year      = {2026},
  publisher = {Hugging Face Models},
  url       = {https://huggingface.co/DannyAI/phi4_african_history_lora_ds2_axolotl},
  urldate   = {2026-01-27},
}

Model Card Authors

Daniel Ihenacho

Model Card Contact
