---
license: apache-2.0
---

## BULaMU: The First Luganda LLM Trained from Scratch

<center><img src="./banner.png" alt="Image" width="25%"></center>


This is the repository for BULaMU, which stands for Breakthrough in Utilization of Large Language Models in Uganda. It is the first foundation model trained completely from scratch in Luganda. For the most up-to-date information on BULaMU, please follow my [X account](https://x.com/MwebazaRick).

This tiny language model has 20M parameters and was trained using modified scripts from [Andrej Karpathy's llama2.c repo](https://github.com/karpathy/llama2.c).
For more technical details about BULaMU, please refer to the [white paper](https://zenodo.org/records/17271688).

This repository includes both the base and finetuned models of BULaMU, located in "base.zip" and "finetuned.zip", respectively. I have also included all of the scripts needed to train your own version of BULaMU from scratch, or to finetune BULaMU from the checkpoint in this repo.
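
If the included scripts keep the conventions of llama2.c's `train.py`, a from-scratch or finetuning run might look roughly like the sketch below. This is illustrative only: the model dimensions and flag names here are assumptions carried over from llama2.c, so check the scripts in this repo for the exact interface.

```bash
# Train from scratch (dimensions are illustrative, not BULaMU's actual config)
python train.py --dim=288 --n_layers=6 --n_heads=6 --batch_size=32

# Resume/finetune from an existing checkpoint in out/ (hypothetical invocation)
python train.py --init_from=resume --out_dir=out
```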

## Run BULaMU

To run inference, first unzip the archive for the model you want ("base.zip" or "finetuned.zip").
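
For example, to use the base model:

```bash
unzip base.zip
```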

Then compile the C code like this:

```bash
make run
```
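
If the Makefile matches llama2.c's, `make runfast` is also available, which compiles with `-Ofast` for faster inference.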

Then run inference from the terminal like this:

```bash
./run model.bin -t 0.8 -n 384 -i "Oli Otya"
```
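
Following llama2.c's `run` interface, `-t` sets the sampling temperature, `-n` the number of tokens to generate, and `-i` the input prompt ("Oli Otya" is a Luganda greeting, roughly "How are you?").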