omarkamali committed
Commit 93f81a9 · verified · 1 Parent(s): e3df544

Upload all models and assets for bo (20251001)

This view is limited to 50 files because it contains too many changes. See the raw diff for the full change set.
Files changed (50)
  1. README.md +260 -136
  2. models/embeddings/monolingual/bo_128d.bin +2 -2
  3. models/embeddings/monolingual/bo_128d_metadata.json +5 -3
  4. models/embeddings/monolingual/bo_32d.bin +2 -2
  5. models/embeddings/monolingual/bo_32d_metadata.json +5 -3
  6. models/embeddings/monolingual/bo_64d.bin +2 -2
  7. models/embeddings/monolingual/bo_64d_metadata.json +5 -3
  8. models/subword_markov/bo_markov_ctx1_subword.parquet +2 -2
  9. models/subword_markov/bo_markov_ctx1_subword_metadata.json +2 -2
  10. models/subword_markov/bo_markov_ctx2_subword.parquet +2 -2
  11. models/subword_markov/bo_markov_ctx2_subword_metadata.json +2 -2
  12. models/subword_markov/bo_markov_ctx3_subword.parquet +2 -2
  13. models/subword_markov/bo_markov_ctx3_subword_metadata.json +2 -2
  14. models/subword_markov/bo_markov_ctx4_subword.parquet +2 -2
  15. models/subword_markov/bo_markov_ctx4_subword_metadata.json +2 -2
  16. models/subword_ngram/bo_2gram_subword.parquet +2 -2
  17. models/subword_ngram/bo_2gram_subword_metadata.json +2 -2
  18. models/subword_ngram/bo_3gram_subword.parquet +2 -2
  19. models/subword_ngram/bo_3gram_subword_metadata.json +2 -2
  20. models/subword_ngram/bo_4gram_subword.parquet +2 -2
  21. models/subword_ngram/bo_4gram_subword_metadata.json +2 -2
  22. models/tokenizer/bo_tokenizer_16k.model +2 -2
  23. models/tokenizer/bo_tokenizer_16k.vocab +0 -0
  24. models/tokenizer/bo_tokenizer_32k.model +2 -2
  25. models/tokenizer/bo_tokenizer_32k.vocab +0 -0
  26. models/tokenizer/bo_tokenizer_64k.model +2 -2
  27. models/tokenizer/bo_tokenizer_64k.vocab +0 -0
  28. models/tokenizer/bo_tokenizer_8k.model +2 -2
  29. models/tokenizer/bo_tokenizer_8k.vocab +0 -0
  30. models/vocabulary/bo_vocabulary.parquet +2 -2
  31. models/vocabulary/bo_vocabulary_metadata.json +10 -9
  32. models/word_markov/bo_markov_ctx1_word.parquet +2 -2
  33. models/word_markov/bo_markov_ctx1_word_metadata.json +2 -2
  34. models/word_markov/bo_markov_ctx2_word.parquet +2 -2
  35. models/word_markov/bo_markov_ctx2_word_metadata.json +2 -2
  36. models/word_markov/bo_markov_ctx3_word.parquet +2 -2
  37. models/word_markov/bo_markov_ctx3_word_metadata.json +2 -2
  38. models/word_markov/bo_markov_ctx4_word.parquet +2 -2
  39. models/word_markov/bo_markov_ctx4_word_metadata.json +2 -2
  40. models/word_ngram/bo_2gram_word.parquet +2 -2
  41. models/word_ngram/bo_2gram_word_metadata.json +2 -2
  42. models/word_ngram/bo_3gram_word.parquet +2 -2
  43. models/word_ngram/bo_3gram_word_metadata.json +2 -2
  44. models/word_ngram/bo_4gram_word.parquet +2 -2
  45. models/word_ngram/bo_4gram_word_metadata.json +2 -2
  46. visualizations/embedding_isotropy.png +0 -0
  47. visualizations/embedding_norms.png +0 -0
  48. visualizations/embedding_similarity.png +2 -2
  49. visualizations/markov_branching.png +0 -0
  50. visualizations/markov_contexts.png +0 -0
README.md CHANGED
@@ -23,14 +23,14 @@ dataset_info:
   metrics:
   - name: best_compression_ratio
     type: compression
-    value: 5.961
+    value: 5.300
   - name: best_isotropy
     type: isotropy
-    value: 0.7739
+    value: 0.8494
   - name: vocabulary_size
     type: vocab
-    value: 13016
-  generated: 2025-12-28
+    value: 0
+  generated: 2026-01-03
 ---
 
 # BO - Wikilangs Models
@@ -44,12 +44,13 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
 ### Models & Assets
 
 - Tokenizers (8k, 16k, 32k, 64k)
-- N-gram models (2, 3, 4-gram)
-- Markov chains (context of 1, 2, 3 and 4)
+- N-gram models (2, 3, 4, 5-gram)
+- Markov chains (context of 1, 2, 3, 4 and 5)
 - Subword N-gram and Markov chains
-- Embeddings in various sizes and dimensions
+- Embeddings in various sizes and dimensions (aligned and unaligned)
 - Language Vocabulary
 - Language Statistics
+
 ![Performance Dashboard](visualizations/performance_dashboard.png)
 
 ### Analysis and Evaluation
@@ -59,7 +60,8 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
 - [3. Markov Chain Evaluation](#3-markov-chain-evaluation)
 - [4. Vocabulary Analysis](#4-vocabulary-analysis)
 - [5. Word Embeddings Evaluation](#5-word-embeddings-evaluation)
-- [6. Summary & Recommendations](#6-summary--recommendations)
+- [6. Morphological Analysis (Experimental)](#6-morphological-analysis)
+- [7. Summary & Recommendations](#7-summary--recommendations)
 - [Metrics Glossary](#appendix-metrics-glossary--interpretation-guide)
 - [Visualizations Index](#visualizations-index)
@@ -68,53 +70,57 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
 ![Tokenizer Compression](visualizations/tokenizer_compression.png)
 
+![Tokenizer Fertility](visualizations/tokenizer_fertility.png)
+
+![Tokenizer OOV](visualizations/tokenizer_oov.png)
+
+![Total Tokens](visualizations/tokenizer_total_tokens.png)
+
 ### Results
 
 | Vocab Size | Compression | Avg Token Len | UNK Rate | Total Tokens |
 |------------|-------------|---------------|----------|--------------|
-| **8k** | 4.296x | 4.23 | 0.3002% | 389,363 |
-| **16k** | 4.846x | 4.77 | 0.3387% | 345,182 |
-| **32k** | 5.407x | 5.33 | 0.3779% | 309,326 |
-| **64k** | 5.961x 🏆 | 5.87 | 0.4166% | 280,591 |
+| **8k** | 4.065x | 4.07 | 0.3680% | 234,234 |
+| **16k** | 4.561x | 4.56 | 0.4129% | 208,750 |
+| **32k** | 4.981x | 4.98 | 0.4510% | 191,137 |
+| **64k** | 5.300x 🏆 | 5.30 | 0.4798% | 179,650 |
 
 ### Tokenization Examples
 
 Below are sample sentences tokenized with each vocabulary size:
 
-**Sample 1:** `གསལ་བྱེད་སུམ་ཅུའི་ནང་ནས་ཀ་ག་བ་ཟ་ར་ས་དྲུག་ནི་ལ་བཏགས་ཅན་གྱི་ཡི་གེ་དྲུག་གོ།
-༼དུང་དཀ...`
+**Sample 1:** `ཞེ་ཆེན་དགོན་ནི་ཞེ་ཆེན་རབ་འབྱམས་དང་པོ་བསྟན་པའི་རྒྱལ་མཚན་གྱིས་ཕྱག་བཏབ་པ་ཡིན།`
 
 | Vocab | Tokens | Count |
 |-------|--------|-------|
-| 8k | `▁གས ལ་བྱེད་ སུམ་ ཅུའི་ ནང་ནས་ ཀ་ ་བ་ ཟ་ ར་ ... (+16 more)` | 26 |
-| 16k | `▁གསལ་བྱེད་ སུམ་ཅུའི་ ནང་ནས་ ཀ་ ་བ་ ཟ་ ར་ ས་ དྲུག་ ... (+13 more)` | 23 |
-| 32k | `▁གསལ་བྱེད་ སུམ་ཅུའི་ ནང་ནས་ ཀ་ ག་བ་ ཟ་ ར་ས་ དྲུག་ནི་ ལ་བཏགས་ ཅན་གྱི་ ... (+9 more)` | 19 |
-| 64k | `▁གསལ་བྱེད་ སུམ་ཅུའི་ ནང་ནས་ ཀ་ ག་བ་ ཟ་ ར་ས་ དྲུག་ནི་ ལ་བཏགས་ ཅན་གྱི་ ... (+9 more)` | 19 |
+| 8k | `▁ཞེ་ ཆེན་ དགོན་ ནི་ ཞེ་ ཆེན་ རབ་འབྱམས་ དང་པོ་ བསྟན་པའི་ རྒྱལ་མཚན་ ... (+3 more)` | 13 |
+| 16k | `▁ཞེ་ ཆེན་ དགོན་ནི་ ཞེ་ ཆེན་ རབ་འབྱམས་ དང་པོ་ བསྟན་པའི་ རྒྱལ་མཚན་གྱིས་ ཕྱག་བཏབ་ ... (+1 more)` | 11 |
+| 32k | `▁ཞེ་ཆེན་ དགོན་ནི་ ཞེ་ཆེན་ རབ་འབྱམས་ དང་པོ་ བསྟན་པའི་ རྒྱལ་མཚན་གྱིས་ ཕྱག་བཏབ་ པ་ཡིན།` | 9 |
+| 64k | `▁ཞེ་ཆེན་ དགོན་ནི་ ཞེ་ཆེན་ རབ་འབྱམས་ དང་པོ་ བསྟན་པའི་ རྒྱལ་མཚན་གྱིས་ ཕྱག་བཏབ་ པ་ཡིན།` | 9 |
 
-**Sample 2:** `བདག་སློབ་བུས་ལུས་ངག་ཡིད་གསུམ་ཁྱོད་ལ་དུས་བརྟན་དུ་འབུལ། ང་ཚོ་མྱུར་དུ་མཇལ་འཛོམ་ཡ...`
+**Sample 2:** `རང་གི་ཕ་མའི་ཁྱིམ་དུ་འཚོ་བ་སྐྱེལ་བའི་ཞེ་སའི་ཚིག ༼དུང་དཀར་ཚིག་མཛོད་ཆེན་མོ་༽ནས་བཏུས...`
 
 | Vocab | Tokens | Count |
 |-------|--------|-------|
-| 8k | `▁བདག་ སློབ་བ ུས་ ལུས་ ངག་ཡིད་ གསུམ་ ཁྱོད་ལ་ དུས་ བརྟན་ དུ་ ... (+18 more)` | 28 |
-| 16k | `▁བདག་ སློབ་བ ུས་ ལུས་ངག་ཡིད་ གསུམ་ ཁྱོད་ལ་ དུས་ བརྟན་ དུ་ འབུལ། ... (+13 more)` | 23 |
-| 32k | `▁བདག་ སློབ་བ ུས་ ལུས་ངག་ཡིད་གསུམ་ ཁྱོད་ལ་ དུས་ བརྟན་ དུ་ འབུལ། ▁ང་ཚོ་ ... (+12 more)` | 22 |
-| 64k | `▁བདག་ སློབ་བ ུས་ ལུས་ངག་ཡིད་གསུམ་ ཁྱོད་ལ་ དུས་ བརྟན་ དུ་ འབུལ། ▁ང་ཚོ་ ... (+10 more)` | 20 |
+| 8k | `▁རང་གི་ ་མའི་ ཁྱིམ་ དུ་ འཚོ་བ་ སྐྱ ེལ་བའི་ ཞེ་ སའི་ ... (+4 more)` | 14 |
+| 16k | `▁རང་གི་ ཕ་མའི་ ཁྱིམ་དུ་ འཚོ་བ་ སྐྱ ེལ་བའི་ ཞེ་སའི་ ཚིག ▁༼དུང་ དཀར་ཚིག་མཛོད་ ... (+1 more)` | 11 |
+| 32k | `▁རང་གི་ ཕ་མའི་ ཁྱིམ་དུ་ འཚོ་བ་ སྐྱེལ་བའི་ ཞེ་སའི་ཚིག ▁༼དུང་ དཀར་ཚིག་མཛོད་ ཆེན་མོ་༽ནས་བཏུས།` | 9 |
+| 64k | `▁རང་གི་ཕ་མའི་ ཁྱིམ་དུ་ འཚོ་བ་སྐྱེལ་བའི་ ཞེ་སའི་ཚིག ▁༼དུང་ དཀར་ཚིག་མཛོད་ ཆེན་མོ་༽ནས་བཏུས།` | 7 |
 
-**Sample 3:** `མི་འཁྲུགས་པ་སྐྱབས་སྦྱིན་ཅན།
-དེང་སང་ཇོ་བོ་རིན་པོ་ཆེའི་སྐུ་རྒྱབ་ཏུ་བཞུགས་པའི་ཇོ...`
+**Sample 3:** `དུས་རྟག་ཏུ་སྐྱེ་འཇིག་མི་བྱེད་པ། དཔེར་ན། ནམ་མཁའ་ལྟ་བུ།`
 
 | Vocab | Tokens | Count |
 |-------|--------|-------|
-| 8k | `▁མི་ འཁྲུགས་ པ་ སྐྱབས་ སྦྱིན་ ཅན། ▁དེང་སང་ ཇོ་བོ་ རིན་པོ་ཆེའི་ སྐུ་ ... (+12 more)` | 22 |
-| 16k | `▁མི་ འཁྲུགས་ པ་ སྐྱབས་ སྦྱིན་ ཅན། ▁དེང་སང་ ཇོ་བོ་ རིན་པོ་ཆེའི་ སྐུ་ ... (+11 more)` | 21 |
-| 32k | `▁མི་ འཁྲུགས་པ་ སྐྱབས་ སྦྱིན་ ཅན། ▁དེང་སང་ ཇོ་བོ་ རིན་པོ་ཆེའི་ སྐུ་ རྒྱབ་ཏུ་བ ... (+7 more)` | 17 |
-| 64k | `▁མི་ འཁྲུགས་པ་ སྐྱབས་སྦྱིན་ ཅན། ▁དེང་སང་ ཇོ་བོ་ རིན་པོ་ཆེའི་སྐུ་ རྒྱབ་ཏུ་བ ཞུགས་པའི་ ཇོ་བོ་ ... (+5 more)` | 15 |
+| 8k | `▁དུས་ རྟག་ཏུ་ སྐྱེ་ འཇིག་ མི་ བྱེད་པ། ▁དཔེར་ན། ▁ནམ་མཁའ་ ལྟ་བུ།` | 9 |
+| 16k | `▁དུས་ རྟག་ཏུ་ སྐྱེ་ འཇིག་ མི་ བྱེད་པ། ▁དཔེར་ན། ▁ནམ་མཁའ་ ལྟ་བུ།` | 9 |
+| 32k | `▁དུས་ རྟག་ཏུ་ སྐྱེ་འཇིག་ མི་ བྱེད་པ། ▁དཔེར་ན། ▁ནམ་མཁའ་ ལྟ་བུ།` | 8 |
+| 64k | `▁དུས་རྟག་ཏུ་ སྐྱེ་འཇིག་ མི་བྱེད་པ། ▁དཔེར་ན། ▁ནམ་མཁའ་ ལྟ་བུ།` | 6 |
 
 
 ### Key Findings
 
-- **Best Compression:** 64k achieves 5.961x compression
-- **Lowest UNK Rate:** 8k with 0.3002% unknown tokens
+- **Best Compression:** 64k achieves 5.300x compression
+- **Lowest UNK Rate:** 8k with 0.3680% unknown tokens
 - **Trade-off:** Larger vocabularies improve compression but increase model size
 - **Recommendation:** 32k vocabulary provides optimal balance for production use
 
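The compression figures in this hunk are easy to sanity-check. A minimal sketch, assuming the shipped `.model` files are standard SentencePiece models (the `.model`/`.vocab` pairing and the `▁` word-boundary marker suggest this, but the commit does not state it):

```python
# Sketch: reproduce chars-per-token "compression" for one tokenizer.
# Assumption: bo_tokenizer_32k.model is a SentencePiece model file.
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="models/tokenizer/bo_tokenizer_32k.model")

text = "ཞེ་ཆེན་དགོན་ནི་ཞེ་ཆེན་རབ་འབྱམས་དང་པོ་བསྟན་པའི་རྒྱལ་མཚན་གྱིས་ཕྱག་བཏབ་པ་ཡིན།"
tokens = sp.encode(text, out_type=str)

# The report's exact compression definition (chars vs. bytes per token)
# is not documented, so treat this ratio as illustrative only.
print(tokens)
print(f"{len(text) / len(tokens):.3f} chars/token over {len(tokens)} tokens")
```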
@@ -123,57 +129,89 @@ Below are sample sentences tokenized with each vocabulary size:
 ![N-gram Perplexity](visualizations/ngram_perplexity.png)
 
+![N-gram Unique](visualizations/ngram_unique.png)
+
 ![N-gram Coverage](visualizations/ngram_coverage.png)
 
 ### Results
 
-| N-gram | Perplexity | Entropy | Unique N-grams | Top-100 Coverage | Top-1000 Coverage |
-|--------|------------|---------|----------------|------------------|-------------------|
-| **2-gram** | 397 🏆 | 8.63 | 16,730 | 60.3% | 96.5% |
-| **2-gram** | 259 🏆 | 8.02 | 10,716 | 68.5% | 98.4% |
-| **3-gram** | 2,660 | 11.38 | 107,789 | 28.7% | 70.8% |
-| **3-gram** | 1,576 | 10.62 | 61,755 | 31.0% | 81.3% |
-| **4-gram** | 14,448 | 13.82 | 410,390 | 13.4% | 41.2% |
-| **4-gram** | 7,647 | 12.90 | 237,121 | 14.5% | 47.5% |
+| N-gram | Variant | Perplexity | Entropy | Unique N-grams | Top-100 Coverage | Top-1000 Coverage |
+|--------|---------|------------|---------|----------------|------------------|-------------------|
+| **2-gram** | Word | 36,069 | 15.14 | 160,381 | 8.0% | 26.4% |
+| **2-gram** | Subword | 469 🏆 | 8.87 | 14,734 | 57.9% | 90.6% |
+| **3-gram** | Word | 207,650 | 17.66 | 482,234 | 3.8% | 11.0% |
+| **3-gram** | Subword | 3,733 | 11.87 | 86,351 | 25.0% | 62.7% |
+| **4-gram** | Word | 569,587 | 19.12 | 997,749 | 3.3% | 7.5% |
+| **4-gram** | Subword | 21,504 | 14.39 | 391,088 | 12.0% | 36.1% |
 
 ### Top 5 N-grams by Size
 
-**2-grams:**
+**2-grams (Word):**
+
+| Rank | N-gram | Count |
+|------|--------|-------|
+| 1 | `པ དང` | 26,663 |
+| 2 | `པ ལ` | 12,165 |
+| 3 | `བ དང` | 12,147 |
+| 4 | `ཐམས ཅད` | 11,625 |
+| 5 | `པ ནི` | 10,955 |
+
+**3-grams (Word):**
+
+| Rank | N-gram | Count |
+|------|--------|-------|
+| 1 | `སྤྱོད འཇུག གི` | 4,095 |
+| 2 | `ད དུང གཟིགས` | 3,422 |
+| 3 | `ཞེས བྱ བ` | 3,401 |
+| 4 | `ཕྱི ཕྱོགས དྲ` | 3,394 |
+| 5 | `ཕྱོགས དྲ མཐུད` | 3,394 |
+
+**4-grams (Word):**
 
 | Rank | N-gram | Count |
 |------|--------|-------|
-| 1 | `ི ་` | 1,330,962 |
-| 2 | `་ ས` | 923,359 |
-| 3 | `ན ་` | 864,506 |
-| 4 | `ས ་` | 804,391 |
-| 5 | `་ ར` | 740,793 |
+| 1 | `ཕྱི ཕྱོགས དྲ མཐུད` | 3,393 |
+| 2 | `དཔྱད གཞིའི དཀར ཆག` | 3,391 |
+| 3 | `ཟིན ཐོ འམ དཔྱད` | 2,805 |
+| 4 | `ཐོ འམ དཔྱད གཞི` | 2,802 |
+| 5 | `ད དུང གཟིགས ཕྱི` | 2,789 |
 
-**3-grams:**
+**2-grams (Subword):**
 
 | Rank | N-gram | Count |
 |------|--------|-------|
-| 1 | `་ ་` | 412,050 |
-| 2 | `ྱ ་` | 279,592 |
-| 3 | `ོ ་` | 249,936 |
-| 4 | `་ དང ་` | 232,586 |
-| 5 | `་ ྱ` | 225,678 |
+| 1 | `ས ་` | 1,063,653 |
+| 2 | `། _` | 775,741 |
+| 3 | `ང ་` | 696,058 |
+| 4 | `ན ་` | 582,326 |
+| 5 | `་ བ` | 571,331 |
 
-**4-grams:**
+**3-grams (Subword):**
 
 | Rank | N-gram | Count |
 |------|--------|-------|
-| 1 | `་ པའ ་` | 220,243 |
-| 2 | `་ ི` | 185,475 |
-| 3 | `་ ྱ` | 176,176 |
-| 4 | `ཀ ་` | 145,131 |
-| 5 | `་ ད ་` | 133,021 |
+| 1 | `་ ་` | 222,811 |
+| 2 | `ག ་` | 205,307 |
+| 3 | `། _ །` | 180,441 |
+| 4 | `ས པ` | 161,245 |
+| 5 | `་ ད ང` | 151,339 |
+
+**4-grams (Subword):**
+
+| Rank | N-gram | Count |
+|------|--------|-------|
+| 1 | `་ ད ང ་` | 128,716 |
+| 2 | `་ པ འི ་` | 107,145 |
+| 3 | `ང ་ ། _` | 84,367 |
+| 4 | `ས ་ པ ་` | 74,777 |
+| 5 | `་ པ ར ་` | 62,788 |
 
 
 ### Key Findings
 
-- **Best Perplexity:** 2-gram with 259
+- **Best Perplexity:** 2-gram (subword) with 469
 - **Entropy Trend:** Decreases with larger n-grams (more predictable)
-- **Coverage:** Top-1000 patterns cover ~47% of corpus
+- **Coverage:** Top-1000 patterns cover ~36% of corpus
 - **Recommendation:** 4-gram or 5-gram for best predictive performance
 
  ---
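The coverage and perplexity columns in this hunk follow mechanically from the n-gram count tables shipped as parquet. A minimal sketch; the column names `ngram` and `count` are assumptions about the parquet schema, which the commit does not spell out:

```python
# Sketch: recompute top-k coverage and perplexity-from-entropy for one
# n-gram table (here the word-level bigrams).
import numpy as np
import pandas as pd

df = pd.read_parquet("models/word_ngram/bo_2gram_word.parquet")
counts = df["count"].sort_values(ascending=False).to_numpy(dtype=float)
total = counts.sum()

# Top-k coverage as reported in the Results table above.
cov100 = counts[:100].sum() / total
cov1000 = counts[:1000].sum() / total

# Perplexity follows from Shannon entropy over the n-gram distribution:
# H = -sum(p * log2(p)),  PPL = 2 ** H.
p = counts / total
H = -(p * np.log2(p)).sum()
print(f"coverage@100={cov100:.1%}  coverage@1000={cov1000:.1%}  PPL={2 ** H:,.0f}")
```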
@@ -181,55 +219,86 @@ Below are sample sentences tokenized with each vocabulary size:
 ![Markov Entropy](visualizations/markov_entropy.png)
 
+![Markov Contexts](visualizations/markov_contexts.png)
+
 ![Markov Branching](visualizations/markov_branching.png)
 
 ### Results
 
-| Context | Avg Entropy | Perplexity | Branching Factor | Unique Contexts | Predictability |
-|---------|-------------|------------|------------------|-----------------|----------------|
-| **1** | 0.2954 | 1.227 | 3.27 | 41,503 | 70.5% |
-| **1** | 1.4785 | 2.787 | 10.14 | 4,527 | 0.0% |
-| **2** | 0.2647 🏆 | 1.201 | 2.76 | 135,576 | 73.5% |
-| **2** | 0.6082 🏆 | 1.524 | 3.67 | 45,909 | 39.2% |
-| **3** | 0.2919 | 1.224 | 2.37 | 373,945 | 70.8% |
-| **3** | 0.5729 | 1.488 | 2.90 | 168,571 | 42.7% |
-| **4** | 0.3575 | 1.281 | 2.36 | 884,085 | 64.2% |
-| **4** | 0.4734 | 1.388 | 2.39 | 489,183 | 52.7% |
+| Context | Variant | Avg Entropy | Perplexity | Branching Factor | Unique Contexts | Predictability |
+|---------|---------|-------------|------------|------------------|-----------------|----------------|
+| **1** | Word | 0.9258 | 1.900 | 17.80 | 44,551 | 7.4% |
+| **1** | Subword | 0.8301 | 1.778 | 6.82 | 8,378 | 17.0% |
+| **2** | Word | 0.7004 | 1.625 | 3.77 | 792,341 | 30.0% |
+| **2** | Subword | 0.4672 | 1.382 | 4.10 | 57,113 | 53.3% |
+| **3** | Word | 0.2866 | 1.220 | 1.60 | 2,987,004 | 71.3% |
+| **3** | Subword | 0.4482 | 1.364 | 3.28 | 233,848 | 55.2% |
+| **4** | Word | 0.1070 🏆 | 1.077 | 1.17 | 4,767,837 | 89.3% |
+| **4** | Subword | 0.3731 | 1.295 | 2.37 | 765,950 | 62.7% |
+
+### Generated Text Samples (Word-based)
+
+Below are text samples generated from each word-based Markov chain model:
+
+**Context Size 1:**
+
+1. `པ ཐམས ཅད དུ བཅུག དེ བསིལ ཟེར མི བཏུབ པའི ཚུལ ཁྲིམས རྒྱལ པོ ཆེའི`
+2. `དང ཁ སྐོང ཝེར ཐུགས རྗེ གླུའི བསྟོད ས ལ བླ མའི སྲས འཇུག གི སྐད`
+3. `ལ བཙུན བསྟེན 2 5 ལོ རྒྱུས ས བཅད རྩོམ རིག སྔགས སོ ཁོས ང མ`
+
+**Context Size 2:**
+
+1. `པ དང ཉེ བའི མཆོད སྤྲིན རྣམ སྡུད དང ཞུས ཤིག བྱས བྱུང ༣༩ སྤྲིན པ བར`
+2. `པ ལ དབང ཐོབ ཤོག ཤུ དག ལི ཁྲིའི ཚོན གྱིས རབ མཛེས ཁ བའི གོས བཟང`
+3. `བ དང གཅོག པར བྱེད པ ཙམ ལ འཁྲུལ མིན རྨི སོགས རྣམ དབྱེ བཞི བའི དེ`
+
+**Context Size 3:**
+
+1. `སྤྱོད འཇུག གི རྣམ བཤད ཐེག ཆེན ཆོས ཀྱི རྒྱ མཚོ སོགས བྱོན རྒྱ དཀར ནག སེར དང`
+2. `ད དུང གཟིགས ཕྱི ཕྱོགས དྲ མཐུད དབྱིན ཇིའི རླུང འཕྲིན ཀུང སིས ཉིན དེར ཉིའུ གཡོད སྐབས`
+3. `ཞེས བྱ བ བསྲུང བའི དམ ཚིག ཅན ཐུགས དམ སྐུལ བ ནི འདི ལ རྟགས ངེས པ`
+
+**Context Size 4:**
+
+1. `དཔྱད གཞིའི དཀར ཆག ད དུང གཟིགས བཙན པོ རིམ བྱོན གྱི མཚན བྱང བྲག གདོང བཀྲས གླིང དབང`
+2. `ཟིན ཐོ འམ དཔྱད གཞི དཔྱད གཞིའི དཀར ཆག ད དུང གཟིགས ཕྱི ཕྱོགས དྲ མཐུད ཤིང འཛུགས དུས`
+3. `ཐོ འམ དཔྱད གཞི དཔྱད གཞིའི དཀར ཆག ད དུང གཟིགས ཕྱི ཕྱོགས དྲ མཐུད དབྱིན ཇིའི རླུང འཕྲིན`
 
-### Generated Text Samples
+### Generated Text Samples (Subword-based)
 
-Below are text samples generated from each Markov chain model:
+Below are text samples generated from each subword-based Markov chain model:
 
 **Context Size 1:**
 
-1. `་ བ ། ཉམས ་ ན ་ འག ྲ ག ་ མ ་ ཤ ུ ང`
-2. `ི ར ྐ ྱ ན ་ བས ྒ ྲ ི ་ ད ་ མ ་ གཅ`
-3. `ོ ས ྙ ི ་ དང ་ ས ྲ ི ས ་ ཆ ུ ར ུ`
+1. `་ཁྲགཟུང་སྒྲིག་ཐུགནསལ་གྱུ`
+2. `ས་ན་པོ།_ལྷན་ཚད་གན་`
+3. `གཞིགས་བ་དྷྱ་བས་སྦྱིན།_`
 
 **Context Size 2:**
 
-1. `ི ་ ས ྒ ྲ ོ ད [ ང ] པའ ི ་ དཀར ་ ཡ ི`
-2. `་ ས ླ ོ ང ་ ག ྱ ི ས ་ འབ ྱ ོ ན ་ བཅས`
-3. `ན ་ ཆ ོ ས ་ རབ ་ འབ ྱ ུ ར ། ། ཁ ོ ངས`
+1. `ས་ལ་པའི་ཐུགས་ནག་འབྱོར`
+2. `།_རབས་ཀྱིས་དཀྱིལ་སྣང་གི`
+3. `ང་མ་བུདྡྷ་ཀྵེ་ཙཱ་བར་ནས།`
 
 **Context Size 3:**
 
-1. `་ པ ་ ཆ ུ བ ་ བ ྱ ེ ད ། ། ཁ ྲ ་ བ ྱ`
-2. `ྱ ི ་ ཕ ་ ར ོ ལ ་ ར ླ བས ་ ཀ ྱ ི ་ ར`
-3. `ོ ད ་ ཀ ྱ ི ས ་ བད ུ ན ་ ཅ ི ་ མ ི ན`
+1. `་པ་སྟེ།_ལྷ་ལྡན་ཡོང་ཡེ་ཤེས`
+2. `གས་མད་ོ_འགྱུར་རྡོ་རྗེའི་བསྲུ`
+3. `།_།རྣམ་འཆོར་བ་དང་སེམས`
 
 **Context Size 4:**
 
-1. `་ པའ ི ་ ར ྒ ྱ ར ་ བར ་ དཀའ ་ བའ ི ་ འཕ ྲ ུ`
-2. `་ ཀ ྱ ི ་ ནང ་ ག ི ་ འབ ུ མ ་ ཡང ་ ། ། ཤ`
-3. `་ ར ྒ ྱ ུ ད ་ པ ། ད ེ ་ ས ྟ ེ ། བ ྲ མ`
+1. `་དང་ཕྱོགས་སུ་བཞིན་ཉུལ་ཆོས`
+2. `་པའི་སེམས་སྐྱེས་ཆེན་ཨུ་ཡོན་`
+3. `ང་།_མདོ་སྡུད་པ་ཅན་ཟན་རི`
 
 
 ### Key Findings
 
-- **Best Predictability:** Context-2 with 73.5% predictability
+- **Best Predictability:** Context-4 (word) with 89.3% predictability
 - **Branching Factor:** Decreases with context size (more deterministic)
-- **Memory Trade-off:** Larger contexts require more storage (489,183 contexts)
+- **Memory Trade-off:** Larger contexts require more storage (765,950 contexts)
 - **Recommendation:** Context-3 or Context-4 for text generation
 
  ---
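Samples like those above come from repeatedly sampling a successor given the last *n* tokens. A minimal sketch of that loop; the parquet column names (`context`, `next_token`, `count`) are assumptions, not a documented schema:

```python
# Sketch: sample text from the context-2 word-level Markov chain.
import random
import pandas as pd

df = pd.read_parquet("models/word_markov/bo_markov_ctx2_word.parquet")

def step(context: str) -> str | None:
    rows = df[df["context"] == context]
    if rows.empty:
        return None  # unseen context: stop generating
    # Sample the next token proportionally to its transition count.
    return random.choices(rows["next_token"].tolist(),
                          weights=rows["count"].tolist())[0]

tokens = ["པ", "དང"]  # seed with a frequent bigram from the tables above
for _ in range(14):
    nxt = step(" ".join(tokens[-2:]))
    if nxt is None:
        break
    tokens.append(nxt)
print(" ".join(tokens))
```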
@@ -245,64 +314,64 @@ Below are text samples generated from each Markov chain model:
 | Metric | Value |
 |--------|-------|
-| Vocabulary Size | 13,016 |
-| Total Tokens | 18,343,262 |
-| Mean Frequency | 1409.29 |
-| Median Frequency | 3 |
-| Frequency Std Dev | 31347.07 |
+| Vocabulary Size | 18,720 |
+| Total Tokens | 7,245,735 |
+| Mean Frequency | 387.06 |
+| Median Frequency | 5 |
+| Frequency Std Dev | 3716.05 |
 
 ### Most Common Words
 
 | Rank | Word | Frequency |
 |------|------|-----------|
-| 1 | | 1,859,452 |
-| 2 | | 1,242,082 |
-| 3 | | 1,225,016 |
-| 4 | | 1,119,190 |
-| 5 | | 979,844 |
-| 6 | | 962,286 |
-| 7 | | 831,513 |
-| 8 | | 704,629 |
-| 9 | | 647,161 |
-| 10 | | 638,483 |
+| 1 | | 262,584 |
+| 2 | དང | 156,471 |
+| 3 | | 145,900 |
+| 4 | | 121,705 |
+| 5 | པའི | 110,790 |
+| 6 | | 88,147 |
+| 7 | དེ | 78,304 |
+| 8 | ནི | 74,845 |
+| 9 | ཀྱི | 70,464 |
+| 10 | དུ | 70,132 |
 
 ### Least Common Words (from vocabulary)
 
 | Rank | Word | Frequency |
 |------|------|-----------|
-| 1 | ༡༡༦༠ | 2 |
-| 2 | w1kg9520 | 2 |
-| 3 | jayasena | 2 |
-| 4 | ༡༡ཙམ | 2 |
-| 5 | 252591 | 2 |
-| 6 | 2525b4 | 2 |
-| 7 | 2525b2 | 2 |
-| 8 | caryā | 2 |
-| 9 | gīti | 2 |
-| 10 | caryāgītivṛtti | 2 |
+| 1 | པིཎྜཱརྠ | 2 |
+| 2 | saṃgraha | 2 |
+| 3 | kṛṣṇācārya | 2 |
+| 4 | པཽཥྚཱི | 2 |
+| 5 | ānanda | 2 |
+| 6 | cakṣu | 2 |
+| 7 | link | 2 |
+| 8 | ལུམྦཱི | 2 |
+| 9 | mine | 2 |
+| 10 | vidhi | 2 |
 
 ### Zipf's Law Analysis
 
 | Metric | Value |
 |--------|-------|
-| Zipf Coefficient | 1.8006 |
-| R² (Goodness of Fit) | 0.975288 |
+| Zipf Coefficient | 2.0020 |
+| R² (Goodness of Fit) | 0.960991 |
 | Adherence Quality | **excellent** |
 
 ### Coverage Analysis
 
 | Top N Words | Coverage |
 |-------------|----------|
-| Top 100 | 92.9% |
-| Top 1,000 | 99.6% |
-| Top 5,000 | 99.9% |
-| Top 10,000 | 100.0% |
+| Top 100 | 47.5% |
+| Top 1,000 | 90.4% |
+| Top 5,000 | 99.1% |
+| Top 10,000 | 99.7% |
 
 ### Key Findings
 
-- **Zipf Compliance:** R²=0.9753 indicates excellent adherence to Zipf's law
-- **High Frequency Dominance:** Top 100 words cover 92.9% of corpus
-- **Long Tail:** 3,016 words needed for remaining 0.0% coverage
+- **Zipf Compliance:** R²=0.9610 indicates excellent adherence to Zipf's law
+- **High Frequency Dominance:** Top 100 words cover 47.5% of corpus
+- **Long Tail:** 8,720 words needed for remaining 0.3% coverage
 
 ---
 ## 5. Word Embeddings Evaluation
@@ -315,24 +384,76 @@ Below are text samples generated from each Markov chain model:
 ![t-SNE Sentences](visualizations/tsne_sentences.png)
 
-### Model Comparison
+### 5.1 Cross-Lingual Alignment
+
+> *Note: Multilingual alignment visualization not available for this language.*
+
+### 5.2 Model Comparison
 
-| Model | Vocab Size | Dimension | Avg Norm | Std Norm | Isotropy |
-|-------|------------|-----------|----------|----------|----------|
-| **mono_32d** | 8,661 | 32 | 6.077 | 1.370 | 0.7739 🏆 |
-| **mono_64d** | 8,661 | 64 | 6.426 | 1.018 | 0.7102 |
-| **mono_128d** | 8,661 | 128 | 6.678 | 0.842 | 0.5344 |
-| **embeddings_enhanced** | 0 | 0 | 0.000 | 0.000 | 0.0000 |
+| Model | Dimension | Isotropy | Semantic Density | Alignment R@1 | Alignment R@10 |
+|-------|-----------|----------|------------------|---------------|----------------|
+| **mono_32d** | 32 | 0.8494 🏆 | 0.3707 | N/A | N/A |
+| **mono_64d** | 64 | 0.7912 | 0.3092 | N/A | N/A |
+| **mono_128d** | 128 | 0.5757 | 0.2954 | N/A | N/A |
 
 ### Key Findings
 
-- **Best Isotropy:** mono_32d with 0.7739 (more uniform distribution)
-- **Dimension Trade-off:** Higher dimensions capture more semantics but reduce isotropy
-- **Vocabulary Coverage:** All models cover 8,661 words
-- **Recommendation:** 100d for balanced semantic capture and efficiency
+- **Best Isotropy:** mono_32d with 0.8494 (more uniform distribution)
+- **Semantic Density:** Average pairwise similarity of 0.3251. Lower values indicate better semantic separation.
+- **Alignment Quality:** No aligned models evaluated in this run.
+- **Recommendation:** 128d aligned for best cross-lingual performance
 
 ---
-## 6. Summary & Recommendations
+## 6. Morphological Analysis (Experimental)
+
+> ⚠️ **Warning:** This language shows low morphological productivity. The statistical signals used for this analysis may be noisy or less reliable than for morphologically rich languages.
+
+This section presents an automated morphological analysis derived from the statistical divergence between word-level and subword-level models. By analyzing where subword predictability spikes and where word-level coverage fails, we can infer linguistic structures without supervised data.
+
+### 6.1 Productivity & Complexity
+
+| Metric | Value | Interpretation | Recommendation |
+|--------|-------|----------------|----------------|
+| Productivity Index | **0.000** | Low morphological productivity | ⚠️ Likely unreliable |
+| Idiomaticity Gap | **-1.000** | Low formulaic content | - |
+
+### 6.2 Affix Inventory (Productive Units)
+
+These are the most productive prefixes and suffixes identified by sampling the vocabulary for global substitutability patterns. A unit is considered an affix if stripping it leaves a valid stem that appears in other contexts.
+
+*No productive affixes detected.*
+
+### 6.3 Bound Stems (Lexical Roots)
+
+Bound stems are high-frequency subword units that are semantically cohesive but rarely appear as standalone words. These often correspond to the 'core' of a word that requires inflection or derivation to be valid.
+
+*No significant bound stems detected.*
+
+### 6.4 Affix Compatibility (Co-occurrence)
+
+This table shows which prefixes and suffixes most frequently co-occur on the same stems, revealing the 'stacking' rules of the language's morphology.
+
+*No significant affix co-occurrences detected.*
+
+### 6.5 Recursive Morpheme Segmentation
+
+Using **Recursive Hierarchical Substitutability**, we decompose complex words into their constituent morphemes. This approach handles nested affixes (e.g., `prefix-prefix-root-suffix`).
+
+*Insufficient data for recursive segmentation.*
+
+### 6.6 Linguistic Interpretation
+
+> **Automated Insight:**
+> The language BO appears to be more isolating or has a highly fixed vocabulary. Word-level models perform nearly as well as subword models, indicating fewer productive morphological processes.
+
+---
+## 7. Summary & Recommendations
 
 ![Performance Dashboard](visualizations/performance_dashboard.png)
 
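The isotropy column above is not defined in this commit; one common definition (the partition-function ratio of Mu & Viswanath) can be sketched as follows. Treat this as an assumed metric, not the pipeline's exact one:

```python
# Sketch: partition-function isotropy score for an embedding matrix E
# of shape (n_words, dim). Returns a value in (0, 1]; 1.0 = isotropic.
import numpy as np

def isotropy(E: np.ndarray, n_dirs: int = 10) -> float:
    E = E - E.mean(axis=0)                       # center the embeddings
    _, _, vt = np.linalg.svd(E, full_matrices=False)
    z = np.exp(E @ vt[:n_dirs].T).sum(axis=0)    # Z(c) per principal direction
    return float(z.min() / z.max())

rng = np.random.default_rng(0)
print(isotropy(rng.normal(size=(1000, 32))))     # close to 1.0 for Gaussian vectors
```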
@@ -340,11 +461,12 @@ Below are text samples generated from each Markov chain model:
 | Component | Recommended | Rationale |
 |-----------|-------------|-----------|
-| Tokenizer | **32k BPE** | Best compression (5.96x) with low UNK rate |
-| N-gram | **5-gram** | Lowest perplexity (259) |
-| Markov | **Context-4** | Highest predictability (73.5%) |
+| Tokenizer | **64k BPE** | Best compression (5.30x) |
+| N-gram | **2-gram** | Lowest perplexity (469) |
+| Markov | **Context-4** | Highest predictability (89.3%) |
 | Embeddings | **100d** | Balanced semantic capture and isotropy |
 
+
 ---
 ## Appendix: Metrics Glossary & Interpretation Guide
 
@@ -534,7 +656,8 @@ If you use these models in your research, please cite:
   author = {Kamali, Omar},
   title = {Wikilangs: Open NLP Models for Wikipedia Languages},
   year = {2025},
-  publisher = {HuggingFace},
+  doi = {10.5281/zenodo.18073153},
+  publisher = {Zenodo},
   url = {https://huggingface.co/wikilangs}
   institution = {Omneity Labs}
 }
@@ -550,7 +673,8 @@ MIT License - Free for academic and commercial use.
 - 🤗 Models: [huggingface.co/wikilangs](https://huggingface.co/wikilangs)
 - 📊 Data: [wikipedia-monthly](https://huggingface.co/datasets/omarkamali/wikipedia-monthly)
 - 👤 Author: [Omar Kamali](https://huggingface.co/omarkamali)
+- 🤝 Sponsor: [Featherless AI](https://featherless.ai)
 ---
 *Generated by Wikilangs Models Pipeline*
 
-*Report Date: 2025-12-28 07:45:49*
+*Report Date: 2026-01-03 07:43:59*
models/embeddings/monolingual/bo_128d.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:d4d28fd32d66a933b9d54910c53ae79e731552039f72a158a509f33345be938c
-size 1033296964
+oid sha256:4d7589a8e32a4a14dff0415d4773f3b2c21bcb132b01f0cdcdbb3fd5cac2d0a7
+size 1031854630
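The `.bin` diffs in this commit change Git LFS pointer files, not the binaries themselves. A pointer is just three `key value` lines, so it can be parsed directly (the pointer text below is copied from this diff):

```python
# Sketch: parse a Git LFS pointer into its fields.
def parse_lfs_pointer(text: str) -> dict[str, str]:
    return dict(line.split(" ", 1) for line in text.strip().splitlines())

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:4d7589a8e32a4a14dff0415d4773f3b2c21bcb132b01f0cdcdbb3fd5cac2d0a7
size 1031854630"""

meta = parse_lfs_pointer(pointer)
print(meta["oid"], int(meta["size"]))  # content hash and byte size of the real file
```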
models/embeddings/monolingual/bo_128d_metadata.json CHANGED
@@ -3,11 +3,13 @@
   "dimension": 128,
   "version": "monolingual",
   "training_params": {
-    "dim": 128,
+    "algorithm": "skipgram",
     "min_count": 5,
     "window": 5,
     "negative": 5,
-    "epochs": 5
+    "epochs": 5,
+    "encoding_method": "rope",
+    "dim": 128
   },
-  "vocab_size": 8661
+  "vocab_size": 7310
 }
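The `training_params` above map one-to-one onto a standard skip-gram word2vec setup. A minimal sketch using gensim as a stand-in (the actual training framework is not named in this commit, and `encoding_method: "rope"` appears to be pipeline-specific with no standard counterpart):

```python
# Sketch: train embeddings with the metadata's hyperparameters.
from gensim.models import Word2Vec

model = Word2Vec(
    corpus_file="bo_corpus.txt",  # hypothetical pre-tokenized corpus, one sentence per line
    vector_size=128,              # "dim": 128
    sg=1,                         # "algorithm": "skipgram"
    min_count=5,                  # "min_count": 5
    window=5,                     # "window": 5
    negative=5,                   # "negative": 5
    epochs=5,                     # "epochs": 5
)
print(len(model.wv))  # should be on the order of the reported vocab_size (7310)
```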
models/embeddings/monolingual/bo_32d.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3f7ec38e53b69d6fe0c7bf028ecf260182dc2c464b0a3998b36285dcdcd0d790
-size 258645316
+oid sha256:25945951b2d9bd762b4aa107b6b33d9788f48752c00df6bb74be80ee7c245721
+size 258240550
models/embeddings/monolingual/bo_32d_metadata.json CHANGED
@@ -3,11 +3,13 @@
   "dimension": 32,
   "version": "monolingual",
   "training_params": {
-    "dim": 32,
+    "algorithm": "skipgram",
     "min_count": 5,
     "window": 5,
     "negative": 5,
-    "epochs": 5
+    "epochs": 5,
+    "encoding_method": "rope",
+    "dim": 32
   },
-  "vocab_size": 8661
+  "vocab_size": 7310
 }
models/embeddings/monolingual/bo_64d.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8a39136f57bc0193b521701d5895d97332f84bef7738e48d579bf3e941611efb
-size 516862532
+oid sha256:05e343618ad45867434c979594e5c58131520908c00fb32eeb884a9598ae5c2d
+size 516111910
models/embeddings/monolingual/bo_64d_metadata.json CHANGED
@@ -3,11 +3,13 @@
   "dimension": 64,
   "version": "monolingual",
   "training_params": {
-    "dim": 64,
+    "algorithm": "skipgram",
     "min_count": 5,
     "window": 5,
     "negative": 5,
-    "epochs": 5
+    "epochs": 5,
+    "encoding_method": "rope",
+    "dim": 64
   },
-  "vocab_size": 8661
+  "vocab_size": 7310
 }
models/subword_markov/bo_markov_ctx1_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:fe6b72dd2f3cb1ce71f886ce79e1991ebc9d866ba3d8e9a4e0765a566abef832
-size 295033
+oid sha256:9bca36abfeb85c32777f9e7088e594c535d131c8aea9a20d599c33315361efed
+size 432607
models/subword_markov/bo_markov_ctx1_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 1,
   "variant": "subword",
   "language": "bo",
-  "unique_contexts": 4527,
-  "total_transitions": 51775331
+  "unique_contexts": 8378,
+  "total_transitions": 22713130
 }
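A quick consistency check one can run on these metadata files: average transitions per unique context. Note this is a different (and larger) quantity than the README's branching factor, which counts *distinct* successors per context:

```python
# Sketch: transitions per context from the Markov metadata.
import json

with open("models/subword_markov/bo_markov_ctx1_subword_metadata.json") as f:
    meta = json.load(f)

# 22713130 / 8378 ≈ 2711 observed transitions per context.
print(meta["total_transitions"] / meta["unique_contexts"])
```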
models/subword_markov/bo_markov_ctx2_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:1343c1704ab694d0457b93b45cf4513b9cc61d7f7b379f22055ad12bdb3453f7
-size 1399204
+oid sha256:f9ea9da39850551788d4eaec8b6dac589a7513f5cd8a9bc6e26566f17ac36287
+size 1811900
models/subword_markov/bo_markov_ctx2_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 2,
   "variant": "subword",
   "language": "bo",
-  "unique_contexts": 45909,
-  "total_transitions": 51761048
+  "unique_contexts": 57113,
+  "total_transitions": 22700930
 }
models/subword_markov/bo_markov_ctx3_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:238ba3561b2e3bbe4353470450e8e2a855ebb948f45a77fec280ddd79e71a6a7
-size 4147532
+oid sha256:592d4d34357c96997930f7a0d9aa2f06a8cef2b6ce1424d18b9c926d509600ea
+size 6648848
models/subword_markov/bo_markov_ctx3_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 3,
   "variant": "subword",
   "language": "bo",
-  "unique_contexts": 168571,
-  "total_transitions": 51746765
+  "unique_contexts": 233848,
+  "total_transitions": 22688730
 }
models/subword_markov/bo_markov_ctx4_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:350c4223d741dac9f5df6b518504739dec14c750eeba2ee92fbe4f3e7793abd0
-size 11386093
+oid sha256:6eb47b6d454ebb16771c614d17731ae5c2763137d58fcb511068e632490768da
+size 18752183
models/subword_markov/bo_markov_ctx4_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 4,
   "variant": "subword",
   "language": "bo",
-  "unique_contexts": 489183,
-  "total_transitions": 51732482
+  "unique_contexts": 765950,
+  "total_transitions": 22676530
 }
models/subword_ngram/bo_2gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:d63fe48d7e16314165daac5a09388e1993a92697e7057f3ca8766f9e8eccfe1e
-size 139446
+oid sha256:b741663b924d6701ddf3061df84aeb993129494f05dbf2c3f13b1b796a281ca7
+size 216011
models/subword_ngram/bo_2gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 2,
   "variant": "subword",
   "language": "bo",
-  "unique_ngrams": 10716,
-  "total_ngrams": 51775331
+  "unique_ngrams": 14734,
+  "total_ngrams": 22713130
 }
models/subword_ngram/bo_3gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:db32beee277ce5b96116ccb7fcb3a4f8100711767d8803de99225390f28611b6
-size 784140
+oid sha256:abf34078d1e786d65a97141325854d312713c2230be0def57e7f299e03d9f735
+size 1203727
models/subword_ngram/bo_3gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 3,
   "variant": "subword",
   "language": "bo",
-  "unique_ngrams": 61755,
-  "total_ngrams": 51761048
+  "unique_ngrams": 86351,
+  "total_ngrams": 22700930
 }
models/subword_ngram/bo_4gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8dfa2e76430d6ce1f681949cc2acbe14cc49ddf452d17256204e4387b0a24cc6
-size 2949767
+oid sha256:d94ccc56cc64b3abed99bb82e501c14f9d76c46160d144d960d1f21e1afd3e74
+size 5706148
models/subword_ngram/bo_4gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 4,
   "variant": "subword",
   "language": "bo",
-  "unique_ngrams": 237121,
-  "total_ngrams": 51746765
+  "unique_ngrams": 391088,
+  "total_ngrams": 22688730
 }
models/tokenizer/bo_tokenizer_16k.model CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:fab010daa62bc41c1b8032f9d748af4fd37db53dee73a43da0d904f4823dd00e
-size 703550
+oid sha256:b80644ab76ef9f1236204c625c4e4dacd553df77f82729816a4c8648fac85d0f
+size 682438
models/tokenizer/bo_tokenizer_16k.vocab CHANGED
The diff for this file is too large to render. See raw diff
 
models/tokenizer/bo_tokenizer_32k.model CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:40f36be3bd4045efa166c3cb5f777437692dfa9587f1e56e7fbb3e723de6eba9
-size 1237082
+oid sha256:099deda5fa2f785fb901ef55b737350aa4362ed293d20131e1a8d111b9cb3b59
+size 1197306
models/tokenizer/bo_tokenizer_32k.vocab CHANGED
The diff for this file is too large to render. See raw diff
 
models/tokenizer/bo_tokenizer_64k.model CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c954964333d2da25bea610a3945473fa0588b9e1f80593834b369fdb080402a0
-size 2362449
+oid sha256:e1346dc7d382bd75b96573d9f0123d6b2b1fa17c2cdfca58ee3e0289e2c5ab80
+size 2340697
models/tokenizer/bo_tokenizer_64k.vocab CHANGED
The diff for this file is too large to render. See raw diff
 
models/tokenizer/bo_tokenizer_8k.model CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:ea17dc7300f482ce3d9c095d25a6131c7ed3dd8f32ff3049fea997cc38cbb782
-size 454260
+oid sha256:3c734632bf663d3ae5074f8a9e3e315b68abee224608a9a41d7a6cad565ec452
+size 442308
models/tokenizer/bo_tokenizer_8k.vocab CHANGED
The diff for this file is too large to render. See raw diff
 
models/vocabulary/bo_vocabulary.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:7eeece097f87af2ab0ce9e55f44cb872a6e4f828118a6c10d018670e7b075bbb
-size 213596
+oid sha256:cfa52cc050640e4828a3dd12a129e73d77eccd600656e7e8ecac1d73d7e04695
+size 326513
models/vocabulary/bo_vocabulary_metadata.json CHANGED
@@ -1,16 +1,17 @@
 {
   "language": "bo",
-  "vocabulary_size": 13016,
+  "vocabulary_size": 18720,
+  "variant": "full",
   "statistics": {
-    "type_token_ratio": 0.0022376683701338307,
+    "type_token_ratio": 0.006158586066529067,
     "coverage": {
-      "top_100": 0.9279372697332342,
-      "top_1000": 0.9940718580638173,
-      "top_5000": 0.9973993752774359,
-      "top_10000": 0.9981424886732634
+      "top_100": 0.4733026861716062,
+      "top_1000": 0.9009478947369145,
+      "top_5000": 0.9874471227821341,
+      "top_10000": 0.9934588401027036
     },
-    "hapax_count": 28093,
-    "hapax_ratio": 0.6833783356442629,
-    "total_documents": 14283
+    "hapax_count": 26064,
+    "hapax_ratio": 0.5819935691318328,
+    "total_documents": 12200
   }
 }
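The README's Zipf coefficient and R² can be recomputed from this vocabulary parquet. A minimal sketch of the standard log-log least-squares fit; the `frequency` column name is an assumed schema, and the pipeline may fit over a restricted rank range:

```python
# Sketch: fit log(frequency) = -s * log(rank) + c; s is the Zipf coefficient.
import numpy as np
import pandas as pd

df = pd.read_parquet("models/vocabulary/bo_vocabulary.parquet")
freq = np.sort(df["frequency"].to_numpy(dtype=float))[::-1]
rank = np.arange(1, len(freq) + 1)

slope, intercept = np.polyfit(np.log(rank), np.log(freq), 1)
pred = slope * np.log(rank) + intercept
ss_res = ((np.log(freq) - pred) ** 2).sum()
ss_tot = ((np.log(freq) - np.log(freq).mean()) ** 2).sum()
print(f"zipf_coefficient={-slope:.4f}  r_squared={1 - ss_res / ss_tot:.6f}")
```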
models/word_markov/bo_markov_ctx1_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:51a4478f179efe4762715a399df064c2175ac4508700aa5231ca3f2241d9baa8
-size 1444473
+oid sha256:28be79310aca5cc300584166fdaf88159c29ab0b3c1a8a528b951d63a306c412
+size 3808759
models/word_markov/bo_markov_ctx1_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 1,
   "variant": "word",
   "language": "bo",
-  "unique_contexts": 41503,
-  "total_transitions": 43169277
+  "unique_contexts": 44551,
+  "total_transitions": 7259599
 }
models/word_markov/bo_markov_ctx2_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:1f391dd5db47185870344cb2760e2499b029b58579a143fa623dcb9747232575
-size 3531426
+oid sha256:e4348c33a538330f70aeb91a4ca305d25fef537360b7e48e26baa9e5158477d4
+size 25145984
models/word_markov/bo_markov_ctx2_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 2,
   "variant": "word",
   "language": "bo",
-  "unique_contexts": 135576,
-  "total_transitions": 43154994
+  "unique_contexts": 792341,
+  "total_transitions": 7247399
 }
models/word_markov/bo_markov_ctx3_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:0ed95c18ea1ad5d5136b568a6334b628dd4b3b1e75fc8e5b4f09af2322d1f4d2
-size 9151444
+oid sha256:ad9f484c6f87c9af2e6a0b055908c920cdb212dd93e7334c0fb71cd19796ca1d
+size 68659503
models/word_markov/bo_markov_ctx3_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 3,
   "variant": "word",
   "language": "bo",
-  "unique_contexts": 373945,
-  "total_transitions": 43140711
+  "unique_contexts": 2987004,
+  "total_transitions": 7235199
 }
models/word_markov/bo_markov_ctx4_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:d7238c4b068f37f9deb6d6f25e25cd27d86fbecff6228c94b188cb8f861f7f7c
-size 22175465
+oid sha256:eab3f700236e7f79ba6cb9a77838c8e911d34df7c8d1732a434003e8c79fa6da
+size 103278579
models/word_markov/bo_markov_ctx4_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 4,
   "variant": "word",
   "language": "bo",
-  "unique_contexts": 884085,
-  "total_transitions": 43126429
+  "unique_contexts": 4767837,
+  "total_transitions": 7222999
 }
models/word_ngram/bo_2gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:b9b0fcbc93bc920804d2780f7b1ac6cc27c0d8317fc2680aa979f081e0f102a1
-size 240285
+oid sha256:b215d956ef6128f37f2fe173259f0c1760c5ae477fe06b994f2786241aedfd0a
+size 2552232
models/word_ngram/bo_2gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 2,
   "variant": "word",
   "language": "bo",
-  "unique_ngrams": 16730,
-  "total_ngrams": 43169277
+  "unique_ngrams": 160381,
+  "total_ngrams": 7259599
 }
models/word_ngram/bo_3gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e71ca878d82f6d8e0a1f3f01aa1350846d2870fdadfca3da38183eb64306aa9b
-size 1495235
+oid sha256:03fad5b4d1bd8fb2ce2aabbabf517f401438082e42441b024de495262c9ed34a
+size 8849454
models/word_ngram/bo_3gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 3,
   "variant": "word",
   "language": "bo",
-  "unique_ngrams": 107789,
-  "total_ngrams": 43154994
+  "unique_ngrams": 482234,
+  "total_ngrams": 7247399
 }
models/word_ngram/bo_4gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3c6d114cdab9167d30db5263ec4923cc3598ee22259e5e41b51e2a9f26d24919
-size 5818114
+oid sha256:cf1b2e7a1bc72005dad39eabec92905b1730dd8e0a888c20d7b57ecb6e67aa90
+size 19665718
models/word_ngram/bo_4gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 4,
   "variant": "word",
   "language": "bo",
-  "unique_ngrams": 410390,
-  "total_ngrams": 43140711
+  "unique_ngrams": 997749,
+  "total_ngrams": 7235199
 }
visualizations/embedding_isotropy.png CHANGED
visualizations/embedding_norms.png CHANGED
visualizations/embedding_similarity.png CHANGED

Git LFS Details (before)

  • SHA256: a71b56edaff4d0d766e2739a143438ae7271974c8743ecc26223f882e1dc1b37
  • Pointer size: 131 Bytes
  • Size of remote file: 150 kB

Git LFS Details (after)

  • SHA256: 2274df780fcf93e823114b610a0a76feb0ec76adff44c39799501a6cf0cbe352
  • Pointer size: 131 Bytes
  • Size of remote file: 142 kB
visualizations/markov_branching.png CHANGED
visualizations/markov_contexts.png CHANGED