---
language:
- tig
license: cc-by-sa-4.0
---

# Tigre Word Embedding Models (FastText)

| Model Name    | Language    | Task                              | License      |
| :------------ | :---------- | :-------------------------------- | :----------- |
| **tig.bin**   | Tigre (tig) | Word Embeddings (FastText)        | CC-BY-SA-4.0 |
| **tigre.vec** | Tigre (tig) | Word Embeddings (Word2Vec format) | CC-BY-SA-4.0 |

### Overview

This repository introduces the first comprehensive public collection of resources for Tigre, an under-resourced South Semitic language within the Afro-Asiatic family. The release spans multiple modalities (text and speech) and provides baseline models for core NLP tasks, including language modeling, ASR, and machine translation.
The embedding models here were trained on a substantial Tigre corpus and are useful for downstream Natural Language Processing (NLP) tasks involving this low-resource language.

## What are FastText Embeddings?

FastText is an extension of the popular Word2Vec model, which represents words as dense, real-valued vectors in a multi-dimensional space. 
The key advantage of FastText is that it represents each word as a bag of character n-grams (subwords). This subword information allows the model to:

1. Generate vectors for out-of-vocabulary (OOV) words (e.g., typos or unseen compounds) by summing the vectors of their character n-grams.
2. Capture morphological structure, which is crucial for morphologically rich languages like Tigre, where words have complex prefixes and suffixes.

### Provided Models

- `tig.bin`: the full binary FastText model, which supports querying subword vectors and out-of-vocabulary words.
- `tigre.vec`: a plain-text file containing only the full word vectors, compatible with tools such as `gensim`, for downstream tasks and visualization.

---

## Model Training & Data Curation

### Corpus and Preprocessing

The model was trained on the enriched Tigre corpus provided in the `BeitTigreAI/tigre-data-dictionary` dataset (and others). The corpus underwent rigorous cleaning to ensure high quality:
1. Punctuation Removal: Removal of Ge'ez punctuation (e.g., ፡, ።, ፥) and numbers.
2. Character Filtering: Removal of any non-Ge'ez characters (U+1200–U+135F), including Latin letters and symbols.
3. Line Chunking: The cleaned text was split into lines with a maximum of 15 words per line.
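The three steps above can be sketched as follows. This is an illustrative reconstruction, not the exact script used to build the corpus:

```python
# Hedged sketch of the described preprocessing pipeline.
import re

# Keep only characters in the Ethiopic block U+1200-U+135F (plus whitespace).
# This single filter covers steps 1 and 2: Ge'ez punctuation (e.g. U+1361 "::"),
# digits, Latin letters, and symbols all fall outside the kept range.
NON_GEEZ = re.compile(r"[^\u1200-\u135F\s]")

def clean_text(text: str) -> str:
    """Remove punctuation, numbers, and non-Ge'ez characters."""
    return re.sub(r"\s+", " ", NON_GEEZ.sub(" ", text)).strip()

def chunk_lines(text: str, max_words: int = 15) -> list[str]:
    """Step 3: split cleaned text into lines of at most `max_words` words."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]
```

For example, `clean_text("ቤት፡ 123 abc ቤት")` yields `"ቤት ቤት"`, with the Ethiopic word separator, digits, and Latin text all stripped.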

### FastText Parameters

The model was trained using the Continuous Bag-of-Words (CBOW) architecture and aligned to the standard English FastText vector space.

| Parameter                    | Value | Rationale                                                                                            |
| ---------------------------- | ----- | ---------------------------------------------------------------------------------------------------- |
| Model                        | cbow  | Standard choice for word embeddings.                                                                 |
| Dimension (dim)              | 300   | Matches the standard pre-trained English models (`cc.en.300.bin`) for later cross-lingual alignment. |
| Epochs                       | 10    | Standard training duration.                                                                          |
| Minimum Count (minCount)     | 2     | Filters out very rare words to improve robustness.                                                   |
| Min/Max N-grams (minn, maxn) | 5/5   | Uses only character 5-grams to capture subword information.                                          |
| Negative Sampling (neg)      | 10    | Standard negative sampling rate.                                                                     |

---

### Derived Asset: Generated Dictionary

The aligned Tigre and English vector spaces were used to generate a large-scale Tigre-English dictionary, leveraging the fact that similar words in different languages should be close in the shared vector space after alignment.
- **Vector Alignment Method:** The Tigre and English vector spaces were aligned using the VecMap tool in a supervised manner, utilizing the existing 6,164-entry Tigre-English-Tigrinya Dictionary as a seed translation lexicon.
- **Generated Dictionary:** A new dictionary file, `tig_eng_generated_dict.tsv`, was created by finding the top-1 nearest English neighbor for every unique Tigre word in the mapped Tigre vector space.
- **Entries:** The generated dictionary contains over 30,000 entries, significantly expanding the initial seed dictionary.
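The nearest-neighbor lookup behind the generated dictionary can be sketched with NumPy. The vectors below are random stand-ins for the aligned spaces, and the word lists are illustrative:

```python
# Toy sketch of the dictionary-generation step: after alignment, each Tigre
# vector's top-1 cosine neighbor among English vectors becomes its translation.
import numpy as np

rng = np.random.default_rng(0)
tig_words = ["ቤት", "ዋልዳይት"]
eng_words = ["house", "mother", "water"]
tig_vecs = rng.normal(size=(len(tig_words), 300))   # stand-in aligned Tigre space
eng_vecs = rng.normal(size=(len(eng_words), 300))   # stand-in English space

# Normalize rows so cosine similarity reduces to a plain dot product.
tig_vecs /= np.linalg.norm(tig_vecs, axis=1, keepdims=True)
eng_vecs /= np.linalg.norm(eng_vecs, axis=1, keepdims=True)
sims = tig_vecs @ eng_vecs.T          # shape (n_tig, n_eng)
best = sims.argmax(axis=1)            # top-1 English neighbor per Tigre word

for tw, idx in zip(tig_words, best):
    print(f"{tw}\t{eng_words[idx]}")  # one row of the generated TSV
```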

### How to Load and Use the Models

The models can be downloaded with the Hugging Face Hub client library and loaded with `fasttext` or `gensim`.

**1. Using `gensim` (for `.vec` files)**

The `.vec` file is ideal for simple embedding lookups and visualization.

```python
from huggingface_hub import hf_hub_download
from gensim.models import KeyedVectors

# Download the vec file
vec_path = hf_hub_download(
    repo_id="BeitTigreAI/tigre-data-fasttext", 
    filename="tigre.vec",
    repo_type="dataset" # Or 'model' if you prefer
)
# Load embeddings
model = KeyedVectors.load_word2vec_format(vec_path, binary=False)
# Example queries
print("Most similar to 'ቤት' (house):", model.most_similar("ቤት"))
print("Most similar to 'ዋልዳይት' (mother/parent):", model.most_similar("ዋልዳይት"))
```
**Output:**

```text
Most similar to 'ቤት' (house): [('ወቤት', 0.54), ('ሐደክዉ', 0.50), ('ኢመሓዛትካ', 0.47), ...]
Most similar to 'ዋልዳይት' (mother/parent): [('ዋልዳይትተ', 0.94), ('ዋልዳይትናመ', 0.93), ('ከዋልዳይት', 0.93), ...]
```

**2. Using `fasttext` (for `.bin` files)**

The `.bin` file is the full FastText model, which can produce vectors for unseen words from their character n-grams.

```python
from huggingface_hub import hf_hub_download
import fasttext

# Download the bin file
bin_path = hf_hub_download(
    repo_id="BeitTigreAI/tigre-data-fasttext", 
    filename="tig.bin",
    repo_type="dataset"
)

# Load model
ft = fasttext.load_model(bin_path)

# Example queries
print("Vector for 'ሻም':", ft.get_word_vector("ሻም")[:10])
print("Nearest neighbors for 'ሻም':", ft.get_nearest_neighbors("ሻም"))
```

**Output:**

```text
Vector for 'ሻም': [-2.2306,  4.1328, -1.3079,  1.3905, -3.1971, -1.2134, 0.4555, -2.9989, -0.7958, -0.2645]
Nearest neighbors for 'ሻም': [(0.55, 'ሻማት'), (0.53, 'ዴሪር'), (0.46, 'ምልህዮት'), ...]
```

## Dataset Structure

```text
tigre-data-fasttext/
├── README.md
├── config.json
├── tig.bin
└── tigre.vec
```

---

## Bias, Risks & Known Limitations

- **Training corpus:** Model quality is directly tied to the coverage and quality of the training corpus. While the text was extensively cleaned, any limitations in the corpus's dialect, topic, or date coverage will be reflected in the embeddings.
- **Vector alignment:** The cross-lingual dictionary generation relies on the initial, smaller, manually curated dictionary for alignment. Accuracy may be lower for words that are not closely related to the seed dictionary entries.
- **English source bias:** The English vocabulary for the seed dictionary was drawn from the most frequently used entries in Webster's Revised Unabridged Dictionary (1913 edition). This may bias the vector alignment toward older English terms.

---

## License

CC-BY-SA-4.0

## Citation

The Tigre FastText models and the derived dictionary are licensed under CC-BY-SA-4.0. If you use this resource in your work, please cite the repository's Hugging Face entry:

> **Tigre Word Embedding Models (FastText)**, BeitTigreAI, https://huggingface.co/datasets/BeitTigreAI/tigre-data-fasttext