🚀 Training LFM2.5-1.2B-AfriBase: A Pan-African Language Model
This document details the complete training process for creating NaolBM/LFM2.5-1.2B-AfriBase - a 1.2B parameter model optimized for African languages through vocabulary extension and continued pre-training (CPT).
📋 Overview
The model extends the LFM2.5-1.2B base model with an Africa-optimized tokenizer and continued pre-training on a massive multilingual African corpus.
Key Stats:
- Base Model: LiquidAI/LFM2.5-1.2B-Base
- Source Tokenizer: NaolBM/Africa-BBPE (50k vocab)
- Final Vocab Size: 110,397 tokens
- Training Platform: Modal (H100 GPUs)
- Training Framework: Unsloth
- Training Data: NaolBM/african-corpus (35M+ rows across 8 languages)
🔧 Step 1: Vocabulary Extension
The first step extends the base model's tokenizer with Africa-BBPE's efficient tokens.
Tokenizer Extension Code
import torch
from unsloth import FastVisionModel
from transformers import AutoTokenizer
# Configuration
BASE_MODEL_ID = "LiquidAI/LFM2.5-1.2B-Base"
SOURCE_TOKENIZER_ID = "NaolBM/Africa-BBPE"
# Load base model
model, processor = FastVisionModel.from_pretrained(
model_name=BASE_MODEL_ID,
max_seq_length=2048,
dtype=torch.float16,
load_in_4bit=True,
)
tokenizer = processor.tokenizer
# Load Africa-BBPE tokenizer
africa_tokenizer = AutoTokenizer.from_pretrained(SOURCE_TOKENIZER_ID)
# Get all tokens and decode them to readable text
new_tokens_ids = [
id for id in range(africa_tokenizer.vocab_size)
if africa_tokenizer.decode([id]) not in tokenizer.all_special_tokens
]
tokens_to_add = [africa_tokenizer.decode([id]) for id in new_tokens_ids]
tokens_to_add = list(set(tokens_to_add))
# Filter out control characters and whitespace
text_whitespace_and_controls = [
"\0", "\a", "\b", "\t", "\n", "\v", "\f", "\r", "\x1b", "\x7f",
" ", "\u00a0", "\u1680", "\u2000", "\u2001", "\u2002", "\u2003",
"\u2004", "\u2005", "\u2006", "\u2007", "\u2008", "\u2009", "\u200a",
"\u2028", "\u2029", "\u202f", "\u205f", "\u3000", "\u200b", "\u200c",
"\u200d", "\ufeff"
]
filtered_tokens = [t for t in tokens_to_add if t not in text_whitespace_and_controls]
# Add tokens to model's tokenizer
tokenizer.add_tokens(filtered_tokens)
print(f"Added {len(filtered_tokens)} new tokens")
# Resize model embeddings
old_embeddings = model.get_input_embeddings()
model.resize_token_embeddings(len(tokenizer))
print(f"Embeddings resized: {old_embeddings.num_embeddings} → {model.get_input_embeddings().num_embeddings}")
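`resize_token_embeddings` leaves the newly added rows at a default initialization. A common companion step, not part of the script above, is to re-initialize each new row to the mean of the pre-existing embeddings, which tends to stabilize early CPT loss (recent `transformers` releases expose similar behavior via a `mean_resizing` argument). A minimal numpy sketch with toy matrix sizes:

```python
import numpy as np

def mean_init_new_rows(embeddings: np.ndarray, new_vocab_size: int) -> np.ndarray:
    """Grow an embedding matrix to new_vocab_size rows, initializing
    every added row to the mean of the original rows."""
    old_vocab_size, hidden_dim = embeddings.shape
    mean_row = embeddings.mean(axis=0)
    grown = np.empty((new_vocab_size, hidden_dim), dtype=embeddings.dtype)
    grown[:old_vocab_size] = embeddings  # keep the pretrained rows intact
    grown[old_vocab_size:] = mean_row    # new tokens start at the centroid
    return grown

# Toy example: 4 old tokens with hidden size 3, grown to 6 tokens
old = np.arange(12, dtype=np.float64).reshape(4, 3)
grown = mean_init_new_rows(old, 6)
```

The same idea applies to the real model by operating on `model.get_input_embeddings().weight` (and `lm_head`) after the resize.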
📊 Step 2: Validation Before Training
Before pushing to Hub, validate the extended tokenizer's performance:
# Test samples across languages
test_samples = [
    ("Amharic", "አማርኛ የኢትዮጵያ ቋንቋ ነው"),
    ("Swahili", "Kiswahili ni lugha ya Kibantu"),
    ("Hausa", "Hausa yare ne na Afro-Asiatic"),
    ("Oromo", "Afaan Oromoo kan namoonni"),
    ("Yoruba", "fun mi kekere ayẹwo"),
    ("Tigrinya", "ትግርኛ ቋንቋ ኣብ ኤርትራ"),
    ("English", "what is your name"),
    ("Arabic", "معالجة اللغة الطبيعية"),
    ("Mixed", "I speak Amharic: አማርኛ and Swahili: Kiswahili")
]
print("="*60)
print("TOKENIZER PERFORMANCE BY LANGUAGE")
print("="*60)
print(f"{'Language':<12} {'Tokens':<8} {'Chars/Token':<12} {'Efficiency'}")
for lang, text in test_samples:
    tokens = tokenizer.tokenize(text)
    num_tokens = len(tokens)
    chars_per_token = len(text) / num_tokens
    # Efficiency rating
    if chars_per_token > 3.0:
        rating = "🌟 EXCELLENT"
    elif chars_per_token > 2.0:
        rating = "👍 GOOD"
    elif chars_per_token > 1.0:
        rating = "🟡 OK"
    else:
        rating = "💩 POOR"
    print(f"{lang:<12} {num_tokens:<8} {chars_per_token:<12.2f} {rating}")
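The thresholds used in the loop above can be factored into a standalone helper for reuse in other scripts. The function name and the worked Swahili example below are illustrative additions, using the same `>3.0 / >2.0 / >1.0` cutoffs:

```python
def efficiency_rating(text: str, num_tokens: int) -> tuple[float, str]:
    """Return (chars per token, rating label) using the validation
    thresholds: >3.0 EXCELLENT, >2.0 GOOD, >1.0 OK, else POOR."""
    chars_per_token = len(text) / num_tokens
    if chars_per_token > 3.0:
        rating = "EXCELLENT"
    elif chars_per_token > 2.0:
        rating = "GOOD"
    elif chars_per_token > 1.0:
        rating = "OK"
    else:
        rating = "POOR"
    return chars_per_token, rating

# "Kiswahili ni lugha ya Kibantu" is 29 characters; at 8 tokens that is
# about 3.62 chars/token, matching the Swahili row in the results table.
cpt, label = efficiency_rating("Kiswahili ni lugha ya Kibantu", 8)
```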
🚀 Step 3: Push to Hugging Face Hub
NEW_REPO_ID = 'NaolBM/LFM2.5-1.2B-AfriBase'
# Push model weights
model.push_to_hub(
NEW_REPO_ID,
safe_serialization=True,
private=True,
max_shard_size="2.8GB"
)
# Push tokenizer
tokenizer.push_to_hub(NEW_REPO_ID)
print(f"✅ Success! Model pushed to https://huggingface.co/{NEW_REPO_ID}")
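As a sanity check on `max_shard_size="2.8GB"`, a rough fp16 size estimate shows why a single shard suffices. The parameter count below is the nominal 1.2B, not a measured number:

```python
def checkpoint_size_gb(num_params: int, bytes_per_param: int = 2) -> float:
    """Rough serialized weight size in GB (fp16/bf16 = 2 bytes/param)."""
    return num_params * bytes_per_param / 1e9

# ~1.2B parameters in fp16 is roughly 2.4 GB, so a 2.8GB shard limit
# keeps the checkpoint in one safetensors file.
size = checkpoint_size_gb(1_200_000_000)
```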
💻 Modal Training Setup
For continued pre-training on Modal, here's the configuration:
train_modal.py
# Dependencies: pip install --upgrade wandb unsloth
import os
os.environ["UNSLOTH_RETURN_LOGITS"] = "1"  # disable CCE since it is not supported for CPT
from unsloth import FastLanguageModel
from transformers import AutoTokenizer
import torch
max_seq_length = 2048 # Choose any! We auto support RoPE Scaling internally!
dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+
load_in_4bit = False # Use 4bit quantization to reduce memory usage. Can be False.
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = "NaolBM/LFM2.5-1.2B-AfriBase", # the vocabulary-extended model pushed in Step 3
max_seq_length = max_seq_length,
# dtype = dtype,
load_in_4bit = load_in_4bit,
full_finetuning = True,
# device_map = "balanced",
)
# We now add LoRA adapters so we only need to update 1 to 10% of all parameters!
#
# We also add `embed_tokens` and `lm_head` to allow the model to learn out of distribution data.
model = FastLanguageModel.get_peft_model(
model,
r = 128, # Choose any number > 0 ! Suggested 8, 16, 32, 64, 128
target_modules = ["q_proj", "k_proj", "v_proj", "out_proj", "in_proj",
"w1", "w2", "w3",
"embed_tokens", "lm_head",], # Add for continual pretraining
ensure_weight_tying = True, # for lm head and embed_tokens
lora_alpha = 32,
lora_dropout = 0, # Supports any, but = 0 is optimized
bias = "none", # Supports any, but = "none" is optimized
# [NEW] "unsloth" uses 30% less VRAM, fits 2x larger batch sizes!
use_gradient_checkpointing = "unsloth", # True or "unsloth" for very long context
random_state = 3407,
use_rslora = True, # We support rank stabilized LoRA
loftq_config = None, # And LoftQ
modules_to_save=["embed_tokens", "lm_head"],
save_embedding_layers = True,
is_trainable = True
)
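To see why `r = 128` adapters still touch only a small slice of each projection, compare adapter parameters against the full weight matrix. The 2048x2048 shape below is illustrative, not LFM2.5's actual geometry, and note that `modules_to_save` trains `embed_tokens` and `lm_head` fully, so the overall trainable fraction is larger than this per-projection ratio:

```python
def lora_param_counts(d_in: int, d_out: int, r: int) -> tuple[int, int]:
    """Return (adapter params, full-matrix params) for one projection.
    A LoRA adapter replaces a d_out x d_in update with B (d_out x r)
    times A (r x d_in)."""
    adapter = r * (d_in + d_out)
    full = d_in * d_out
    return adapter, full

adapter, full = lora_param_counts(2048, 2048, 128)
fraction = adapter / full  # 0.125 for these toy dimensions
```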
# ### Data Prep
# We must add `EOS_TOKEN` or `tokenizer.eos_token` or else the model's generation will go on forever.
from datasets import load_dataset, concatenate_datasets
EOS_TOKEN = tokenizer.eos_token # Must add EOS_TOKEN
def formatting_prompts_func(examples):
    texts = examples["text"]
    outputs = []
    for text in texts:
        # Must add EOS_TOKEN, otherwise your generation will go on forever!
        outputs.append(text + EOS_TOKEN)
    return { "text" : outputs, }
african = load_dataset("NaolBM/african-corpus", split="train")
wiki_en = load_dataset("wikimedia/wikipedia", "20231101.en", split="train")
merged = concatenate_datasets([african, wiki_en])
dataset = merged.map(formatting_prompts_func, batched = True,)
MERGED_DATASET = dataset.shuffle(seed=3407)
# Print the first 5 rows from `MERGED_DATASET`
print(f"Final dataset: {len(MERGED_DATASET)} rows")
for row in MERGED_DATASET[:5]["text"]:
print("=========================")
print(row)
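The EOS-appending logic above can be verified without loading any dataset; `append_eos` and the `<|eos|>` string below are stand-ins for the real function and `tokenizer.eos_token`:

```python
EOS = "<|eos|>"  # stand-in for tokenizer.eos_token

def append_eos(examples: dict) -> dict:
    # Same logic as formatting_prompts_func: append EOS to every row
    # so the model learns where generation should stop.
    return {"text": [text + EOS for text in examples["text"]]}

batch = {"text": ["Habari ya asubuhi", "Kiswahili ni lugha"]}
out = append_eos(batch)
```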
# ### Continued Pretraining
# Now let's use Unsloth's `UnslothTrainer`! More docs here: [TRL SFT docs](https://huggingface.co/docs/trl/sft_trainer). We train for one full epoch with `num_train_epochs = 1`; for a quick smoke test, set `max_steps` to a small value instead.
#
# Also set `embedding_learning_rate` to a learning rate at least 2x to 10x smaller than `learning_rate`, so that continual pretraining does not destroy the pretrained embeddings!
from trl import SFTTrainer
from transformers import TrainingArguments
from unsloth import UnslothTrainer, UnslothTrainingArguments
import os  # for the W&B environment variables below
report_to = 'none' # Use trackio/wandb etc
if report_to != 'none':
    os.environ["WANDB_PROJECT"] = "Kiya-CPT-Base"
    os.environ["WANDB_LOG_MODEL"] = "checkpoint"
trainer = UnslothTrainer(
model = model,
tokenizer = tokenizer,
train_dataset = MERGED_DATASET,
dataset_text_field = "text",
max_seq_length = max_seq_length,
dataset_num_proc = 8,
args = UnslothTrainingArguments(
# per_device_train_batch_size = 2,
# gradient_accumulation_steps = 8,
per_device_train_batch_size = 32,
gradient_accumulation_steps = 4,
# for 42M rows with 128 batch is around 328k steps
# max_steps = 250000,
# warmup_steps = 10,
# warmup_ratio = 0.1,
# num_train_epochs = 1,
warmup_ratio = 0.03,
num_train_epochs = 1,
# learning_rate = 5e-5,
# embedding_learning_rate = 5e-6,
learning_rate = 1e-4,
embedding_learning_rate = 1e-5,
optim = "adamw_8bit",
# weight_decay = 0.00,
weight_decay = 0.01,
lr_scheduler_type = "cosine",
seed = 3407,
output_dir = "outputs",
save_strategy = "steps",
save_steps = 500,
save_total_limit = 2,
logging_steps = 50,
report_to = report_to,
),
)
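The commented-out `max_steps` estimate follows from simple arithmetic: `per_device_train_batch_size = 32` with `gradient_accumulation_steps = 4` on one GPU gives an effective batch of 128 sequences, so a ~42M-row merged dataset takes roughly 328k optimizer steps per epoch:

```python
def steps_per_epoch(num_rows: int, per_device_batch: int,
                    grad_accum: int, num_gpus: int = 1) -> int:
    """Optimizer steps for one full pass over the training data."""
    effective_batch = per_device_batch * grad_accum * num_gpus
    return num_rows // effective_batch

# 32 x 4 on a single GPU is an effective batch of 128 sequences,
# matching the "around 328k steps" comment above.
steps = steps_per_epoch(42_000_000, 32, 4)
```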
# @title Start Training
trainer_stats = trainer.train()
# trainer_stats = trainer.train(resume_from_checkpoint = True)
# ### Inference
# Let's run the model! We prompt it in Amharic to check that continued pretraining
# taught it to continue text fluently in the extended languages, streaming the
# output token by token.
from transformers import TextIteratorStreamer
from threading import Thread
text_streamer = TextIteratorStreamer(tokenizer)
import textwrap
max_print_width = 100
# Before running inference, call `FastLanguageModel.for_inference` first
FastLanguageModel.for_inference(model)
inputs = tokenizer(
    [
        "አንድ ቀን",  # placeholder Amharic story opener ("One day")
    ], return_tensors = "pt").to("cuda")
generation_kwargs = dict(
inputs,
streamer = text_streamer,
max_new_tokens = 256,
use_cache = True,
)
thread = Thread(target = model.generate, kwargs = generation_kwargs)
thread.start()
length = 0
for j, new_text in enumerate(text_streamer):
    if j == 0:
        wrapped_text = textwrap.wrap(new_text, width = max_print_width)
        length = len(wrapped_text[-1])
        wrapped_text = "\n".join(wrapped_text)
        print(wrapped_text, end = "")
    else:
        length += len(new_text)
        if length >= max_print_width:
            length = 0
            print()
        print(new_text, end = "")
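The printing loop above mixes `textwrap` with manual length bookkeeping. The core wrapping idea, accumulating streamed chunks and breaking the line once a width limit is reached, can be isolated in a simplified helper (my own restatement; its break points differ slightly from the loop above, which breaks after exceeding the width):

```python
def wrap_stream(chunks, max_width=100):
    """Join streamed text chunks into lines no longer than max_width,
    breaking only between chunks."""
    lines, current = [], ""
    for chunk in chunks:
        # Start a new line if appending this chunk would overflow.
        if len(current) + len(chunk) > max_width and current:
            lines.append(current)
            current = ""
        current += chunk
    if current:
        lines.append(current)
    return lines

lines = wrap_stream(["abcde", "fgh", "ij"], max_width=8)
```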
# Push CPT basemodel to HF
NEW_REPO_ID = "NaolBM/Kiya-Base-1.2B"
# tokenizer.save_pretrained("lora_model")
# safetensors to hf
model.push_to_hub_merged(
NEW_REPO_ID,
tokenizer,
# save_method = "merged_4bit", # "merged_16bit"
private=True,
)
📊 Results
After vocabulary extension and CPT, the model achieves:
| Language | Chars/Token | Efficiency |
|---|---|---|
| Amharic | 3.75 | 🌟 EXCELLENT |
| Swahili | 3.62 | 🌟 EXCELLENT |
| Hausa | 2.64 | 👍 GOOD |
| Oromo | 5.00 | 🌟 EXCELLENT |
| Yoruba | 2.11 | 👍 GOOD |
| Tigrinya | 3.40 | 🌟 EXCELLENT |
| English | 2.83 | 👍 GOOD |
| Arabic | 1.24 | 🟡 OK |
| Mixed | 4.40 | 🌟 EXCELLENT |
📚 Dataset Details
The training corpus NaolBM/african-corpus contains:
- Total rows: 35,344,339
- Languages: Swahili, Amharic, Hausa, English, Oromo, Yoruba, Tigrinya
- Format: a single `text` column with language labels
🔗 Resources
- Final Model: NaolBM/LFM2.5-1.2B-AfriBase
- Base Model: LiquidAI/LFM2.5-1.2B-Base
- Source Tokenizer: NaolBM/Africa-BBPE
- Training Dataset: NaolBM/african-corpus
📝 Citation
@misc{naol2026afribase,
  title={LFM2.5-1.2B-AfriBase: A 1.2B Parameter Pan-African Language Model},
  author={Naol},
  year={2026},
  howpublished={\url{https://huggingface.co/NaolBM/LFM2.5-1.2B-AfriBase}},
}
Made with ❤️ for African AI 🇪🇹🇰🇪🇳🇬🇿🇦