---
license: cc-by-sa-4.0
language:
- tig
tags:
- tigre
- language-model
- kenlm
- n-gram
---

# Tigre 3-gram Language Model (KenLM)

## Overview

This repository provides a **3-gram Language Model (LM)** for the **Tigre** language, trained using the **KenLM** toolkit. This model is a foundational resource for various downstream NLP and speech applications, including:

- Rescoring hypotheses in Automatic Speech Recognition (ASR).
- Improving text generation and fluency in Machine Translation (MT).
- Performing basic text filtering and quality control.

The model is provided in the standard ARPA (`.arpa`) text format, which KenLM can load directly or convert to its optimized binary format for efficient use in production environments.

## Model Statistics

This language model was trained using KenLM on the **Tigre Monolingual Text Dataset (Tigre-Data 1.0)**.

| Statistic                            | Value     |
| :----------------------------------- | :-------- |
| **Model Order**                      | 3-gram    |
| **Vocabulary Size (Unique 1-grams)** | 316,548   |
| **Total Unique N-grams (1-to-3)**    | 1,285,462 |
| **Example Perplexity** (on 'ቤት')     | 147.12    |

_Note: The total raw training tokens used for this model can be found in the Tigre Monolingual Text Dataset card (approximately 14.7 million tokens)._
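Perplexity and total log10 probability are related by PPL = 10^(−log10 P / N), where N is the number of scored tokens (KenLM's `Model.perplexity` counts each word plus the implicit end-of-sentence marker `</s>`). A minimal sketch of that conversion, using invented numbers for illustration only:

```python
def perplexity_from_log10(log10_prob: float, num_tokens: int) -> float:
    """Convert a sentence's total log10 probability into perplexity.

    num_tokens should include the implicit </s> token, matching how
    KenLM's Model.perplexity normalizes its score.
    """
    return 10.0 ** (-log10_prob / num_tokens)

# Illustrative only: a 2-word sentence (3 scored tokens including </s>)
# with a hypothetical total log10 probability of -6.51.
print(round(perplexity_from_log10(-6.51, 3), 2))
```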

## Training Data Source
This model was trained exclusively on the **BeitTigreAI/tigre-data-monolingual-text** dataset.
More detailed information about the training data, including its domain, bias, preprocessing steps, and source statistics, can be found in the dataset's documentation:
[Tigre Monolingual Text Dataset README](https://huggingface.co/datasets/BeitTigreAI/tigre-data-monolingual-text/blob/main/README.md)

---

## Files and Structure

The repository contains the following files:

```
tigre-data-kenLM/
├── README.md
├── hf_readme.ipynb
└── tigre-data-kenLM.arpa
```
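The `.arpa` file follows KenLM's standard ARPA text layout: a `\data\` header listing n-gram counts per order, then one section per order with lines of the form `log10-probability  n-gram  [backoff]`. A minimal illustrative fragment (the entries and numbers below are invented, not taken from this model):

```
\data\
ngram 1=3
ngram 2=2

\1-grams:
-1.0	<s>	-0.30
-0.7	ቤት	-0.25
-1.2	</s>

\2-grams:
-0.5	<s> ቤት
-0.4	ቤት </s>

\end\
```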

## How to Use the Model
You can load and query the model using the Python bindings for **KenLM** (`kenlm`).

### Installation
To use the model in Python, install the KenLM bindings:
```bash
pip install kenlm
```

### Example Usage (Perplexity and Score)
The following Python code demonstrates how to load the model and query it for log probability and perplexity:

```python
import kenlm
from huggingface_hub import hf_hub_download
# 1. Download the ARPA model file from the Hugging Face Hub
arpa_path = hf_hub_download(
    repo_id="BeitTigreAI/tigre-data-kenLM",
    filename="tigre-data-kenLM.arpa",
    repo_type="model"
)
# 2. Load the KenLM model
lm = kenlm.Model(arpa_path)
# Example sentence to score
test_sentence = "ዕርቃን ሓይልን ንግሥን"
# A. Calculate Log10 Probability of the entire sentence
log_prob = lm.score(test_sentence)
print(f"Sentence: '{test_sentence}'")
print(f"Log10 Probability: {log_prob:.4f}")
# B. Calculate Perplexity of the entire sentence
perplexity = lm.perplexity(test_sentence)
print(f"Perplexity: {perplexity:.2f}")

```

## Licensing and Citation

The Tigre 3-gram Language Model is licensed under **CC-BY-SA-4.0**.

If you use this resource in your work, please cite the repository by referencing its Hugging Face entry:

- **Repository:** Tigre 3-gram Language Model (KenLM)
- **Organization:** BeitTigreAI
- **URL:** https://huggingface.co/BeitTigreAI/tigre-data-kenLM