beshiribrahim committed 325dc03 (verified) · Parent(s): d1191ff

Update README.md

Files changed (1): README.md (+121, −26)
---
language:
- tig
tags:
- fasttext
- word-embeddings
- tigre
license: cc-by-sa-4.0
---

# Tigre Word Embedding Models (FastText)

| Model Name    | Language    | Task                              | License      |
| :------------ | :---------- | :-------------------------------- | :----------- |
| **tig.bin**   | Tigre (tig) | Word Embeddings (FastText)        | CC-BY-SA-4.0 |
| **tigre.vec** | Tigre (tig) | Word Embeddings (Word2Vec format) | CC-BY-SA-4.0 |

### Overview

This repository introduces the first comprehensive public collection of resources for the **Tigre** language — an under-resourced South Semitic language within the Afro-Asiatic family. The release aggregates multiple modalities (text + speech) and provides baseline models for several core NLP tasks, including language modeling, ASR, and machine translation.

The models were trained on a substantial Tigre corpus and are valuable for any downstream Natural Language Processing (NLP) task, especially those involving this low-resource language.

## What are FastText Embeddings?

FastText is an extension of the popular Word2Vec model, which represents words as dense, real-valued vectors in a multi-dimensional space. The key advantage of FastText is that it represents each word as a bag of character n-grams (subwords). This subword information allows the model to:

1. Generate vectors for out-of-vocabulary (OOV) words (e.g., typos or unseen compounds) by summing the vectors of their character n-grams.
2. Capture morphological structure, which is crucial for morphologically rich languages like Tigre, where words have complex prefixes and suffixes.
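To make the subword idea concrete, the decomposition can be sketched in a few lines of plain Python (the helper `char_ngrams` is illustrative, not part of the `fasttext` API), using the `<`/`>` boundary markers FastText adds and the 5-gram setting this model was trained with:

```python
def char_ngrams(word, minn=5, maxn=5):
    """Extract the character n-grams FastText would use for `word`.

    FastText wraps each word in boundary markers before extracting
    n-grams; with minn=maxn=5 (this model's setting) only 5-grams
    are produced.
    """
    wrapped = f"<{word}>"
    return [wrapped[i:i + n]
            for n in range(minn, maxn + 1)
            for i in range(len(wrapped) - n + 1)]

print(char_ngrams("house"))
# ['<hous', 'house', 'ouse>']
```

An OOV word's vector is then the sum of its n-gram vectors. Note that a short Ge'ez word such as ቤት spans fewer than five characters even with boundary markers and yields no 5-grams at all, which is worth keeping in mind when querying the `.bin` model for very short OOV forms.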

The models provided here are:

- `tig.bin`: The binary FastText model (full model), which allows for querying subword vectors and OOV words.
- `tigre.vec`: A plain text file containing only the full word vectors, compatible with tools like `gensim` and used for downstream tasks or visualizations.

---

## Model Training & Data Curation

### Corpus and Preprocessing

The model was trained on the enriched Tigre corpus provided in the BeitTigreAI/tigre-data-dictionary dataset (and others). The corpus underwent rigorous cleaning to ensure high quality:
1. **Punctuation Removal:** Removal of Ge'ez punctuation (e.g., ፡, ።, ፥) and numbers.
2. **Character Filtering:** Removal of any characters outside the Ge'ez range (U+1200–U+135F), including Latin letters and symbols.
3. **Line Chunking:** The cleaned text was split into lines with a maximum of 15 words per line.
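A minimal sketch of that pipeline in plain Python (the helper name and sample string are ours; the Ge'ez range is the one cited above — conveniently, Ethiopic punctuation and digits fall outside it, so one character filter covers steps 1 and 2):

```python
import re

ETHIOPIC = r"\u1200-\u135F"  # Ge'ez range cited above

def clean_and_chunk(text, max_words=15):
    """Keep only Ge'ez letters, then split into lines of at most
    `max_words` words (steps 1-3 of the cleaning pipeline)."""
    # Steps 1-2: everything outside the Ge'ez block (punctuation,
    # digits, Latin letters, symbols) becomes whitespace.
    text = re.sub(rf"[^{ETHIOPIC}]+", " ", text)
    words = text.split()
    # Step 3: chunk the word stream into fixed-size lines.
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

print(clean_and_chunk("ቤት፡ ዋልዳይት። test 123", max_words=2))
# ['ቤት ዋልዳይት']
```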

### FastText Parameters

The model was trained using the Continuous Bag-of-Words (CBOW) architecture and aligned to the standard English FastText vector space.

| Parameter                    | Value | Rationale                                                                                            |
| ---------------------------- | ----- | ---------------------------------------------------------------------------------------------------- |
| Model                        | cbow  | Standard choice for word embeddings.                                                                  |
| Dimension (dim)              | 300   | Matches the standard pre-trained English models (`cc.en.300.bin`) for later cross-lingual alignment.  |
| Epochs                       | 10    | Standard training duration.                                                                           |
| Minimum Count (minCount)     | 2     | Filters out very rare words to improve robustness.                                                    |
| Min/Max N-grams (minn, maxn) | 5/5   | Uses only 5-grams to capture subword information, matching common FastText configurations.            |
| Negative Sampling (neg)      | 10    | Standard negative sampling rate.                                                                      |

---

### Derived Asset: Generated Dictionary

The aligned Tigre and English vector spaces were used to generate a large-scale Tigre-English dictionary, leveraging the fact that similar words in different languages should be close in the shared vector space after alignment.

- **Vector Alignment Method:** The Tigre and English vector spaces were aligned using the VecMap tool in a supervised manner, utilizing the existing 6,164-entry Tigre-English-Tigrinya dictionary as a seed translation lexicon.
- **Generated Dictionary:** A new dictionary file, `tig_eng_generated_dict.tsv`, was created by finding the Top-1 nearest English neighbor for every unique Tigre word in the mapped Tigre vector space.
- **Entries:** This generated dictionary contains 30,000+ entries, significantly expanding the initial seed dictionary.
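The Top-1 step amounts to a cosine nearest-neighbor search over the aligned spaces. Here is a stdlib-only sketch (function names and the 2-d toy vectors are illustrative; the real vectors are 300-dimensional VecMap outputs):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def top1_translation(tig_vec, eng_vecs):
    """Return the English word whose aligned vector is closest
    to the given Tigre vector by cosine similarity."""
    return max(eng_vecs, key=lambda w: cosine(tig_vec, eng_vecs[w]))

# Toy aligned vectors, illustrative only.
eng = {"house": [0.9, 0.1], "mother": [0.1, 0.9]}
print(top1_translation([0.8, 0.2], eng))
# house
```

Running this lookup over every unique Tigre word, with the aligned English vocabulary as `eng_vecs`, yields one candidate translation per word, which is how the generated `.tsv` dictionary was produced.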

### How to Load and Use the Models

The models can be downloaded and loaded with the Hugging Face Hub client library together with `fasttext` or `gensim` (`pip install gensim fasttext huggingface_hub`).

#### 1. Using `gensim` (for `.vec` files)

The `.vec` file is ideal for simple embedding lookups and visualization.

```python
from huggingface_hub import hf_hub_download
from gensim.models import KeyedVectors

# Download the vec file
vec_path = hf_hub_download(
    repo_id="BeitTigreAI/tigre-data-fasttext",
    filename="tigre.vec",
    repo_type="dataset"
)

# Load embeddings
model = KeyedVectors.load_word2vec_format(vec_path, binary=False)

# Example queries
print("Most similar to 'ቤት' (house):", model.most_similar("ቤት"))
print("Most similar to 'ዋልዳይት' (mother/parent):", model.most_similar("ዋልዳይት"))
```

Output:

```text
Most similar to 'ቤት' (house): [('ወቤት', 0.54), ('ሐደክዉ', 0.50), ('ኢመሓዛትካ', 0.47), ...]
Most similar to 'ዋልዳይት' (mother/parent): [('ዋልዳይትተ', 0.94), ('ዋልዳይትናመ', 0.93), ('ከዋልዳይት', 0.93), ...]
```

#### 2. Using `fasttext` (for `.bin` files)

The `.bin` file is the full FastText model, which allows you to query vectors for unseen words and character n-grams.

```python
from huggingface_hub import hf_hub_download
import fasttext

# Download the bin file
bin_path = hf_hub_download(
    repo_id="BeitTigreAI/tigre-data-fasttext",
    filename="tig.bin",
    repo_type="dataset"
)

# Load the full model
ft = fasttext.load_model(bin_path)

# Example queries
print("Vector for 'ሻም':", ft.get_word_vector("ሻም")[:10])
print("Nearest neighbors for 'ሻም':", ft.get_nearest_neighbors("ሻም"))
```

Output:

```text
Vector for 'ሻም': [-2.2306, 4.1328, -1.3079, 1.3905, -3.1971, -1.2134, 0.4555, -2.9989, -0.7958, -0.2645]
Nearest neighbors for 'ሻም': [(0.55, 'ሻማት'), (0.53, 'ዴሪር'), (0.46, 'ምልህዮት'), ...]
```
128
+
129
+ ## Dataset Structure
130
+
131
+ tigre-data-fasttext/
132
+ ├── README.md
133
+ ├── config.json
134
+ ├── tig.bin
135
+ ├── tigre.vec
136
+
137
+ ---

## Bias, Risks & Known Limitations

**Training Corpus:** The model quality is directly tied to the coverage and quality of the training corpus. While the text was extensively cleaned, any underlying limitations in the corpus's dialect, topic, or date coverage will be reflected in the embeddings.

**Vector Alignment:** The cross-lingual dictionary generation relies on the initial, smaller, manually curated dictionary for alignment. Performance for words that are not closely related to the seed dictionary entries may be less accurate.

**English Source Bias:** The initial English vocabulary for the seed dictionary was drawn from the most frequently used vocabulary in Webster's Revised Unabridged Dictionary (1913 edition). This may bias the vector alignment toward older English terms.

---

## Licensing (Per Modality)

The Tigre FastText models and the derived dictionary are licensed under CC-BY-SA-4.0.

## Citation

If you use this resource in your work, please cite the repository by referencing its Hugging Face entry:

- **Repository Name:** Tigre Word Embedding Models (FastText)
- **URL:** https://huggingface.co/datasets/BeitTigreAI/tigre-data-fasttext