BULaMU: The First Luganda LLM trained from Scratch

This is the repository for BULaMU, which stands for Breakthrough in Utilization of Large Language Models in Uganda. It is the first foundation model trained completely from scratch in Luganda. For the most up-to-date information on BULaMU, please follow my X account.

This tiny language model has 20M parameters and was trained using modified scripts from Andrej Karpathy's llama2.c repo. For more technical details about BULaMU, please refer to the white paper.

This repository includes both the base and finetuned models of BULaMU, located in "base.zip" and "finetuned.zip" respectively. It also includes all the scripts needed to train your own version of BULaMU from scratch or to finetune BULaMU from the checkpoint in this repo.

Run BULaMU

To run inference, unzip the folder for the model you want ("base.zip" or "finetuned.zip").

Then compile the C code like this:

make run

Then run inference in the terminal like this:

./run model.bin -t 0.8 -n 384 -i "Oli Otya"
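Putting the steps above together, here is a minimal end-to-end sketch. It assumes the unzipped archive contains a checkpoint named model.bin; adjust the filename to match the actual contents of the zip.

```shell
# Unpack the finetuned checkpoint (use base.zip for the base model instead).
unzip finetuned.zip

# Build the llama2.c inference binary (produces ./run).
make run

# Sample up to 384 tokens at temperature 0.8 from a Luganda prompt.
# -t : sampling temperature, -n : number of tokens to generate, -i : input prompt
./run model.bin -t 0.8 -n 384 -i "Oli Otya"
```

Lowering -t toward 0.0 makes sampling greedier and more deterministic, while raising it increases the diversity of the generated Luganda text.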